269439952
pes2o/s2orc
v3-fos-license
Assessing the Utility of Oral and Maxillofacial Surgery Posters as Educational Aids in Dental Education for Undergraduate Students: Is it Useless or Helpful?

Background: Educational posters play a crucial role in education, information dissemination, and awareness. Their visual appeal efficiently communicates condensed yet vital information on significant topics, making them valuable for teaching sequential concepts. We aimed to assess the effectiveness of educational posters in the oral and maxillofacial surgery department for student education.

Methods: The study was carried out during the fall semester of 2022 at Mashhad Dental School, Mashhad, Iran, utilizing a questionnaire-based approach. The questionnaire gathered demographic information and assessed students' perspectives on educational posters. Statistical analysis was performed using SPSS version 23 with a significance level set at 0.05.

Results: This study was conducted on 70 students (35 females and 35 males). Gender-based analysis demonstrated significant differences in beauty and adaptability, and in learning, with male students scoring lower than females (P values = 0.036 and 0.031, respectively). Further analysis by academic year revealed higher beauty and adaptability scores among third-year students compared to second-year students, reaching statistical significance (P value = 0.035). A two-by-two comparison highlighted that the average beauty scores of third- and fifth-year students surpassed that of second-year students (P values = 0.041 and 0.038, respectively). In summary, higher academic years correlated with higher ratings, emphasizing the potential impact of educational posters on academic outcomes.

Conclusion: Posters in the oral and maxillofacial surgery department received commendable ratings in various areas, positively impacting the teaching and learning process.
INTRODUCTION

Insufficient mastery of clinical topics in oral and maxillofacial surgery among students can give rise to issues within the department, notably affecting their ability to manage critical conditions such as syncope or asthma attacks. Furthermore, a lack of awareness regarding optimal dental procedure execution can contribute to an elevated incidence of occupational ailments such as musculoskeletal disorders (MSD). A systematic review found that the prevalence of MSD among dental professionals falls within the range of 64% to 93% [1]. Consequently, there is a growing imperative to incorporate ergonomics principles into dental school curricula [2,3]. Visual and hands-on training methodologies prove markedly more effective than traditional lecture-based instruction when teaching ergonomics [4]. Numerous topics are encompassed within the training curriculum of the oral and maxillofacial surgery department. These include principles of sterilization, history-taking and documentation, techniques for anesthetic administration, managing medical emergencies in dentistry, tooth extraction procedures and tools, ergonomics, and the fundamentals of suturing. A study conducted at the Faculty of Dentistry, Qazvin University of Medical Sciences, yielded insights from students who reported that the skill of suturing proficiently and effectively was the least acquired competence within the oral and maxillofacial surgery department [5]. Educational posters serve as a valuable medium for imparting education, disseminating information, and raising awareness [6]. These posters can captivate the viewer's attention with their visual appeal while succinctly conveying vital subject matter. They prove especially advantageous in elucidating concepts that involve stages or necessitate visual aids for effective instruction. Several studies have scrutinized the efficacy and attention-grabbing potential of dental educational posters, whether used in isolation or in conjunction with other educational methodologies, aimed at promoting health in waiting rooms, educating primary school teachers on managing children's dental traumas, instructing dental students in biosafety principles, enlightening patients on the prevention of malignant diseases within the head and neck region, and elucidating diagnostic and treatment modalities for dentists [7-9]. These investigations have consistently underscored the utility of educational posters as a significant factor in heightening awareness. However, it is noteworthy that there exists a scarcity of research concerning educational posters in the context of dentistry. Therefore, an in-depth exploration of the effectiveness of educational posters holds promise in guiding decisions regarding their proliferation across diverse subject matter, their integration with other educational multimedia, and the evaluation of students' educational requirements. To the best of the researchers' knowledge, no prior studies have appraised the impact of educational posters within the oral and maxillofacial surgery department as an adjunct to the educational process for students. Consequently, we aimed to bridge this knowledge gap and contribute to advancements in this realm. The primary objective of this study is to explore students' perspectives on the posters within the Department of Oral and Maxillofacial Surgery. Additionally, it seeks to assess the extent to which students engage with and pay attention to these departmental posters. Furthermore, we aimed to examine whether there
exists any correlation between students' gender, academic year, and their level of attention to the department's posters, along with their awareness of the concepts presented therein.

MATERIALS AND METHODS

This descriptive-analytical study was carried out during the fall semester (October to February) of 2022 at the Faculty of Dentistry, Mashhad University of Medical Sciences, Mashhad, Iran, employing a questionnaire-based methodology. Ethical approval for the study was granted by the Ethics Committee of Mashhad University of Medical Sciences, indicated by code IR.MUMS.DENTISTRY.REC.1401.092.

The questionnaire employed was developed by the researchers. To ensure qualitative validity, ten oral and maxillofacial surgery specialists and one medical education specialist were consulted to provide feedback on grammar, phrasing, and the arrangement of statements within the questionnaire. For quantitative content validity assessment, two metrics, namely the Content Validity Ratio (CVR) and the Content Validity Index (CVI), were utilized. External reliability was determined by administering the questionnaire to 20 students and employing the test-retest method, calculating the correlation coefficient over a 10-day interval, which yielded a correlation coefficient exceeding 0.7. Furthermore, the internal reliability of the questionnaire was assessed through Cronbach's alpha coefficient, which also surpassed the 0.7 threshold. The questionnaire for the study is depicted in Figure 1.

Seventy questionnaires were distributed among students enrolled in the oral and maxillofacial surgery practical courses who willingly participated in the study. The questionnaire consisted of two parts: the first part gathered demographic information from the students, including their gender and academic year. Subsequently, students responded to survey questions covering various aspects such as presentation, attraction, coherency, esthetic and consistency, visual aspects, text and content images, content educational aspects, learning, multimedia support, and technology. Following the collection of questionnaires, data analysis was conducted using SPSS version 23 (IBM Corp., Armonk, NY, USA). Descriptive statistics, including appropriate charts and tables, were employed to elucidate statistical indicators and present the frequency distribution of the data. The normality of the data was assessed using the Shapiro-Wilk test. To analyze and ascertain data correlations, t-tests and Pearson's correlation coefficient were applied. The threshold for statistical significance in all tests was set at less than 0.05.
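To make the analysis pipeline concrete, the following is a minimal sketch of the reliability and group-comparison steps described above, written in Python with pandas/SciPy rather than SPSS. The file name and the column names (gender, learning, q1..qN) are illustrative assumptions, not taken from the study.

```python
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical layout: one row per student, Likert item columns q1..qN plus 'gender'.
df = pd.read_csv("poster_survey.csv")                # placeholder file name
items = df.filter(regex=r"^q\d+$")
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")  # study reports > 0.7

# Shapiro-Wilk normality check of one domain score within each gender group.
for gender, group in df.groupby("gender"):
    w, p = stats.shapiro(group["learning"])          # 'learning' is an illustrative column
    print(f"Shapiro-Wilk ({gender}): W = {w:.3f}, P = {p:.3f}")

# Independent-samples t-test between genders, as applied where normality holds.
t, p = stats.ttest_ind(df.loc[df["gender"] == "female", "learning"],
                       df.loc[df["gender"] == "male", "learning"])
print(f"t-test: t = {t:.3f}, P = {p:.3f}")           # significant if P < 0.05
```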
RESULTS

The perspectives of 70 dental students from Mashhad who had enrolled in practical courses within the oral and maxillofacial surgery department were examined, with a specific focus on their evaluation of the educational posters displayed within the department. These students comprised 35 (50%) females and 35 (50%) males. Among them, 12 individuals (17.1 percent) were in their second year, 15 (21.4 percent) in the third year, 27 (38.6 percent) in the fourth year, and 16 (22.9 percent) in the fifth year of their studies. The data underwent an initial analysis based on gender, wherein the normality of the distribution of quantitative variables was assessed using the Shapiro-Wilk test; exceptions were observed for presentation among males (P value = 0.096) and attraction among males.

In Table 1, the number, average, standard deviation, median, interquartile range, and minimum and maximum values of the variables by gender, along with the results of the statistical tests, are given. As can be seen, the mean scores of the presentation, coherency, visual aspects, text and content images, and content educational aspects variables were lower in male students than in female students, but the differences were not significant (P value > 0.05 for each). The mean scores of esthetic and consistency and of learning in male students were significantly lower than in female students (P value = 0.036 and P value = 0.031, respectively).

For the analysis based on student year, as with gender, we assessed the normality of the distribution of quantitative variables using the Shapiro-Wilk test. This analysis revealed that more than half of the variables exhibited a normal distribution. Further details regarding these results can be found in Table 2.

In Table 3, comprehensive data, including the number, average, standard deviation, median, interquartile range, and minimum and maximum values of the variables, along with the results of statistical tests, are provided, categorized by students' educational year. Notably, the lowest and highest average presentation scores were observed among second-year and third-year students, respectively; however, no significant difference was identified between students of different years in this regard (P value = 0.683). Similarly, for attraction scores, the lowest and highest averages were associated with third-year and fifth-year students, respectively, with no significant inter-year variation (P value = 0.266). Mean coherency scores exhibited the lowest and highest values for fourth-year and second-year students, respectively, yet no statistically significant differences were observed between students from different years (P value = 0.561). In contrast, mean scores for esthetic and consistency were significantly different between second-year and third-year students, with the lowest average scores attributed to second-year students and the highest to third-year students (P value = 0.035). Furthermore, in a pairwise comparison of students at different stages of their academic journey, the average aesthetics score of third-year students significantly exceeded that of second-year students (P value = 0.041), although no significant differences were observed among students at other stages of study. The mean scores for visual aspects and for text and content images were at their lowest for fourth-year students and highest for third-year students. Nevertheless, there was no significant difference between students of different years in this regard (P value = 0.717). Similarly, the average scores for
multimedia support and technology were lowest among second-year students and highest among fifth-year students, but the difference between students of different years was not statistically significant (P value = 0.710). The lowest and highest average scores for content educational aspects were associated with fourth-year and third-year students, respectively, with no significant difference between students of varying years (P value = 0.329). The lowest and highest average learning scores were observed in second-year and fifth-year students, respectively, and the difference between students of different years was significant in this regard (P value = 0.032). In a two-by-two comparison of students from different years of study, the average aesthetics score of fifth-year students was significantly higher than that of second-year students (P value = 0.038); no other pairwise comparisons between years revealed significant differences in this aspect.

DISCUSSION

As a form of written-visual media, the poster serves as a vehicle for communication between the designer and their audience or audiences, conveying message content through diverse visual elements. The designer's approach to poster creation should be grounded in psychological theories within the realm of education and learning. This approach aims to stimulate learner motivation, enhance cognitive processes, and ultimately facilitate improved learning outcomes. Such an approach enables the adaptation of education to cater to diverse learner styles. Educational posters find common application in environments like classrooms, libraries, and other educational settings. Nevertheless, the efficacy of educational posters remains a subject of debate among scholars and researchers. While some proponents contend that posters effectively enhance learning, others argue that alternative teaching methodologies may yield superior results [10]. Several studies have explored the effectiveness of educational posters in improving learning outcomes. Among the suggested methods for learning are lectures, catalogs, and posters. Traditional face-to-face lectures, widely employed across universities for education, serve as the cornerstone of information dissemination to students but come with higher costs and demand additional time for information review and presentation. This is particularly relevant in practical courses, where diverting attention from hands-on training can potentially hinder clinical instruction. On the other hand, the catalog method, while relatively cost-effective, carries the risk of individuals forgetting or misplacing the catalogs, rendering them inaccessible when needed. In contrast, the poster method proves to be a cost-efficient alternative, readily available in designated locations (as demonstrated in this study within the oral and maxillofacial surgery department). This method offers timely and location-specific access to the necessary information [11]. Young et al.
observed that the utilization of educational posters can effectively influence the attitudes and awareness of high school students regarding dental trauma management. The knowledge scores of students in the educational intervention group notably surpassed those in the control group. Nevertheless, the researchers identified an issue with the educational posters, noting that a significant portion of participants, both in the intervention and control groups, failed to address several specific questions, indicating a lack of attention to certain aspects of the posters [12]. Awad and colleagues conducted an investigation into the impact of utilizing educational posters on the awareness of dental trauma among secondary school teachers. The employment of educational posters resulted in a notable enhancement of teachers' knowledge and awareness. Interestingly, the teachers who already possessed some relative knowledge regarding dental trauma management reported a more pronounced effect of the educational posters compared to those teachers with limited prior knowledge on the subject. Furthermore, the researchers highlighted the significance of identifying and emphasizing pivotal content within the poster, underscoring the importance of precise poster design [13]. In the present study, akin to this research, higher-year students exhibited superior evaluations of the poster across various domains in comparison to their lower-year counterparts. Generally, the repetition and thorough review of educational materials, coupled with hands-on practical training, can facilitate enhanced learning outcomes and, in turn, indirectly impact a student's comprehension of and engagement with educational posters [14]. In a similar study, Ghadimi and colleagues delved into the effectiveness of employing educational posters to enhance the awareness levels of health teachers in schools across Tehran. Prior to the implementation of the educational intervention, the teachers exhibited a limited understanding of how to manage dental trauma incidents. However, following the introduction of educational posters, their knowledge witnessed a substantial and statistically significant increase in comparison to the control group [11]. Hasanica et al. conducted a study to examine the impact of utilizing printed texts and posters on elevating students' awareness of health-related behaviors, encompassing aspects like healthy eating, exercise, and healthy habits. The poster group demonstrated the most significant effects, primarily attributed to the visual appeal, visibility, and prolonged exposure they offered [15]. Based on varying student perspectives, it appears that the incorporation of educational posters as a supplementary tool alongside other educational methods can prove beneficial. To further enhance the efficacy of posters, complementing them with practical activities and hands-on workshops has been suggested, a notion supported by previous research findings [16]. Furthermore, it is worth noting that, despite its efficacy in supplementary education, poster usage may not significantly impact attitude change. This limitation arises from the inherent constraints of poster presentations, where direct interaction between the instructor and the learner is not possible, resulting in an indirect transmission of information that may not deeply influence students' attitudes. Consequently, relying solely on this method is unlikely to yield substantial results [11].
Thus, in light of the reviewed studies in this field, which compare various educational methods, the integration of diverse instructional approaches is deemed essential. Selecting suitable educational interventions to enhance learning outcomes on a broad scale represents one of the most efficient and cost-effective measures for educating learners. Consequently, the quest for an effective, economical, user-friendly, and comprehensive educational solution holds paramount importance in the field of education. In light of the considerable costs and time involved in designing, reproducing, and disseminating educational posters, it is prudent to explore ways to maximize their impact on learners. To achieve this, it is advisable to assess the effectiveness of posters under various conditions and in strategically chosen locations, thus optimizing their placement for greater efficiency. Additionally, augmenting poster campaigns with complementary educational strategies can further enhance their overall effectiveness and influence on learners.

CONCLUSION

The posters designed in the Oral and Maxillofacial Surgery Department received commendable ratings in various areas, including presentation, coherency, esthetic and consistency, visual aspects, text and content images, content educational aspects, learning, attraction, multimedia support, and technology. These posters had a positive impact on the teaching and learning process.

Fig. 1: The questionnaire for the study
Table 1: Comparison of each field between female and male students
Table 2: The result of the Shapiro-Wilk test for the normality of the data distribution of quantitative variables according to the student's educational year
Table 3: Comparison of each field between different students' educational years (the result of one-factor analysis of variance)
2024-04-29T15:22:25.056Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "b86ede95a232d02463ee794d6e3cc55db027a706", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "f939e6b4037b77f911c71fbfef4331ec55e74519", "s2fieldsofstudy": [ "Medicine", "Education" ], "extfieldsofstudy": [ "Medicine" ] }
9949293
pes2o/s2orc
v3-fos-license
New Australovenator Hind Limb Elements Pertaining to the Holotype Reveal the Most Complete Neovenatorid Leg

We report new skeletal elements pertaining to the same individual which represents the holotype of Australovenator wintonensis, from the 'Matilda Site' in the Winton Formation (Upper Cretaceous) of western Queensland. The discovery of these new elements means that the hind limb of Australovenator is now the most completely understood hind limb among Neovenatoridae. The new hind limb elements include: the left fibula; left metatarsal IV; left pedal phalanges I-2, II-1, III-4, IV-2, IV-3; and right pedal phalanges II-2 and III-1. The detailed descriptions are supported with three-dimensional figures. These, coupled with the completeness of the hind limb, will increase the utility of Australovenator in comparisons with less complete neovenatorid genera. These specimens and the previously described hind limb elements of Australovenator are compared with other theropods classified as neovenatorids (including Neovenator, Chilantaisaurus, Fukuiraptor, Orkoraptor and Megaraptor). Hind limb length proportion comparisons indicate that the smaller neovenatorids Australovenator and Fukuiraptor possess more elongate and gracile hind limb elements than the larger Neovenator and Chilantaisaurus. Greater stride lengths relative to body size exist in both Fukuiraptor and Australovenator, with the femur discovered to be proportionally shorter than the rest of the hind limb length. Additionally, Australovenator is identified as possessing the most elongate metatarsus. The metatarsus morphology varies with body size: the larger neovenatorids possess a metatarsus with greater width but shorter length compared to smaller forms.

Introduction

The skeletal remains of Australovenator wintonensis [1] were discovered interspersed with the remains of a sauropod dinosaur, Diamantinasaurus matildae [1]. The fossils were excavated from Australian Age of Dinosaurs Locality 85 (AODL 85), the "Matilda site" on Elderslie station, approximately 60 km northwest of Winton, Queensland, Australia. Samples from the Matilda site underwent zircon dating indicating a Cenomanian age (ca. 95 Ma) for the site (Figure 6 in [2]) [2,3]. The deposit was first identified by the landowners, who discovered large fragmented sauropod remains exposed on the surface. Excavation of the site demonstrated that the bones were being reworked from gunmetal blue-coloured clay, rich in plant debris (Figure 1; Figure S1). The plant material consists of a diverse range of macro- and microflora [4-10]. The deposit was interpreted as an abandoned channel fill or oxbow lake [1]. Most of the specimens were found to be encased in a concretionary phosphatic crust. Although substantial skeletal remains of Australovenator were reported in the description of the holotype [1], the preparation of concretions from AODL 85 continued following the publication of the paper, yielding new forelimb [11] and hind limb elements of Australovenator.
Herein we describe new hind limb elements pertaining to the holotype individual of Australovenator wintonensis (Australian Age of Dinosaurs Fossil 604 [AODF 604]). The new hind limb elements described include: the left fibula; left metatarsal IV; left pedal phalanges I-2, II-1, III-3, III-4, IV-2, IV-3; and right pedal phalanges II-2 and III-1. We also revise the identifications of the previously described pedal phalanges. These corrections were made based on comparisons with phalanges of Allosaurus fragilis [12], Neovenator salerii [13] and the emu (Dromaius novaehollandiae). The pedal phalanges described in the initial description of Australovenator, when assigned to their correct positions, represent left pedal phalanx IV-1 and right pedal phalanges I-2, II-3, III-2, III-3, IV-4 and IV-5. The near completeness of the hind limb of Australovenator will increase its utility in comparisons with less complete neovenatorid genera, particularly by ensuring that their pedal elements will be interpreted correctly.

Fossil Preparation

Specimens were prepared using pneumatic air scribes and chisels. They were consolidated with Paraloid B72. Polyethylene glycol (PEG 3350 'Carbowax') was used to support fragile fossil specimens during preparation, filling in gaps and cracks for extra support and absorbing vibration caused by the pneumatic preparation tools.

Specimens

All necessary permits were obtained for the described study, which complied with all relevant regulations. Permission to excavate the specimens from Elderslie station was obtained from the landholders. During excavation each specimen is given a preliminary field number for location and storing purposes. Once the specimens have been prepared and formally identified, they are donated by the landholder to the Australian Age of Dinosaurs Museum of Natural History (AAOD). All specimens pertaining to the holotype of Australovenator wintonensis are allocated the specimen number AODF 604. The specimens are stored in a climate-controlled room at the Australian Age of Dinosaurs Museum, 15 km east of Winton, Queensland, Australia.

Computed Tomography

The Australovenator specimens were computed tomography (CT) scanned at Queensland X-ray, Mackay Mater Hospital, central eastern Queensland, using a Philips Brilliance CT 64-slice machine which produced 0.9 mm slices. Mimics version 10.01 software was used to view internal structures in cross-section and to create three-dimensional renders. These were subsequently scanned to obtain an external mesh. The meshes were then imported into the graphic design package Rhinoceros 4.0, which was used to develop rendered meshes of fossil specimens, enabling the morphology to be clearly viewed alongside actual specimens.

3-d Figures

Individual meshes of fossil specimens were loaded into a custom program that loads an Alias Wavefront (.obj)-format mesh and compresses it into the Product Representation Compact (PRC) format (International Organization for Standardization Draft International Standard ISO/DIS 14739-1.3), suitable for embedding in a Portable Document Format (PDF) file as an interactive, 3-dimensional figure. We used a modified version of the program xrw2pdf from the S2VOLSURF tools [14], based on the S2PLOT programming library [15,16].
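As an illustration of the embedding step described next, the snippet below is a minimal, hypothetical LaTeX sketch of how a converted PRC mesh might be included as an interactive 3-d figure with the movie15 package named in the text. The file name, label and dimensions are placeholders, and the exact option names should be checked against the installed movie15 version.

```latex
\documentclass{article}
% movie15 is the package named in the text; its 3D support is loaded here.
\usepackage[3D]{movie15}

\begin{document}
\begin{figure}[ht]
  \centering
  % Embed a PRC mesh (e.g. produced from an .obj file by xrw2pdf).
  % File name, label and dimensions are illustrative placeholders.
  \includemovie[
    poster,
    label=fibula3d,
    text={(3-d mesh of the left fibula)}
  ]{0.8\linewidth}{0.6\linewidth}{left_fibula.prc}
  \caption{Interactive 3-d render of a fossil mesh (placeholder example).}
\end{figure}
\end{document}
```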
PRC files were embedded in PDF documents as interactive figures using the LaTeX document preparation system, the movie15 style file for LaTeX (supporting multimedia enhancements to PDF documents), and the JavaScript file s2plot-prc.js included with S2PLOT. When viewed in Adobe Reader or Adobe Acrobat on desktop systems (Microsoft Windows, Apple Macintosh OS X, Linux), the resultant supplementary 3-d figures enable the interactive rotation, zooming, and relighting of the fossil meshes.

Results and Discussion

The new hind limb and pedal elements described below were initially identified by comparison with Allosaurus fragilis (Plates 53-55 in [17]) and Neovenator salerii (Figure 25 and Plates 44-45 in [18]). The preservation of the Australovenator phalanges enabled rearticulation of adjacent elements. We present detailed figures of each element. The following specimens were described in the initial description of Australovenator wintonensis [1]: the right femur (Figure 2; Figure S2), right and left tibiae (Figure 3; Figure S3), right fibula (Figure 4; Figure S4), right astragalus (Figure 5; Figure S5), left metatarsal I (Figure 6; Figure S6), and right metatarsals II (Figure 7; Figure S7) and III (Figure 8; Figure S8). Left metatarsal I was originally identified as a right element [1]. These specimens, or their counterparts from the limb of the opposite side, were described in that earlier work.

Left Fibula (Figure 4)

The original description of the right fibula (Figure 4A-L) was adequate [1]. The left fibula, reported here (Figure 4M-Q), is missing its distal portion; however, the proximal end is complete, unlike that of the right fibula, revealing a proximally flatter and more rounded lateral surface. Post-mortem distortion has morphed the proximomedial fossa so that it is more ovoid than in the right fibula and has also caused the shaft to be bent distally. Measurements are given in Table 1.

Left metatarsal IV (Figure 9; Figure S9)

Metatarsal IV is poorly preserved, with the shaft sustaining multiple post-mortem fractures. The distal condyle is missing. The shaft is concave medially at its proximal end, forming an articular surface for metatarsal III. The lateral surface is slightly concave for its articulation with metatarsal V. Disto-laterally there is a depression just proximal to the lateral condyle. The shaft is elongate and approximately straight, as in metatarsals III and II. The cross-section of the mid-shaft is crescentic for most of its length (Figure 9.2), with the lateral face slightly concave and the medial face convex. A distal cross-section is oval, with very thick cortical bone and a narrow ovoid medullary cavity. The lateral face becomes convex distally, resulting in a rounded cross-section (Figure 9.3). Although the distal condyles are missing, their proximally preserved portion suggests that the lateral condyle was ventrolaterally angled and taller than the medial condyle, which is transversely broad. Measurements are given in Table 1 (hind limb measurements; lengths estimated where the specimen is not entirely preserved are marked with an asterisk (*); doi:10.1371/journal.pone.0068649.t001).
Right metatarsus (Figure 10; Figure S10)

Metatarsals I and IV from the left foot have been mirrored to reconstruct an articulated metatarsus. Metatarsal II was digitally straightened, as the specimen was deformed with an unnatural medial bend. Despite this deformation, metatarsals II and III both have a distinct articulation both proximally and distally. The mirrored metatarsal IV articulates well at the proximal end. The metatarsus is quite gracile relative to the more robustly built metatarsi of the less derived Neovenator, Chilantaisaurus and Allosaurus. The general morphology of the metatarsus is that of an underived theropod metatarsus, in which the proximal shaft of metatarsal III is visible in cranial and caudal views and the distal portion of the shaft is circular, distinctly different from the morphological features of an arctometatarsus or subarctometatarsus (Figure 1 in [19]). Measurements are given in Table 2.

Left and right pedal phalanges I-2 (Figure 11; Figure S11)

The first pedal digit of basal theropods comprises two phalanges. Only the distal (ungual) phalanx, I-2, is known in Australovenator. The right pedal phalanx I-2 (Figure 11K-O) was originally mistaken for the distal tip of manual phalanx II-3 [1]. The discovery and description of additional manual elements resulted in its correct identification as a pedal phalanx [11]. The right specimen is missing the articular facet; however, the left is complete. The lateral surface is rounded, whereas the medial surface bears a distinct ventromedial ridge. The proximal articular surface is divided into two articular facets. The medial facet is large and depressed, whilst the lateral facet is small and located dorsomedially. It is similar in morphology to that of Allosaurus fragilis (Plate 54 in [17]). Measurements are given in Table 3.

Left pedal phalanx II-1 (Figure 12; Figure S12)

In contrast with pedal digit I, the second pedal digit is more completely known, with one example of all three phalanges preserved. The phalanx is broken proximally, meaning that the proximal articular surface is missing. The shaft is elongate and curved medially. The mid-shaft cross-section is circular (Figure 12.2), becoming trapezoidal distally (Figure 12.3). The medial distal condyle is taller and broader than the lateral condyle. In distal view, the medial condyle appears to be dorsoventrally orientated, whereas the lateral condyle is slanted laterally. Both condyles have very shallow collateral ligament pits. Measurements are given in Table 3.

Left pedal phalanx II-2 (Figure 13; Figure S13)

Pedal phalanx II-2 is poorly preserved, with both the proximal and distal ends incomplete. The specimen is proportionally short compared to II-1, and has a rounded trapezoidal cross-section (Figure 13.1). Although the proximal articular facets are poorly preserved, the lateral appears broader and taller than the medial. At the distal end, both the medial and lateral condyles have deep collateral ligament pits. The preserved morphology of the distal condyles suggests that the medial condyle was orientated dorsoventrally and the lateral condyle angled laterally. A well-defined hyper-extensor pit is present on the dorsal surface immediately proximal to the distal condyles. Measurements are given in Table 3.
Right pedal phalanx II-3 (Figure 14; Figure S14)

Pedal phalanx II-3, the ungual of the second digit, is well preserved. It has a subtriangular cross-section (Figure 14.1), is recurved, and tapers distally to a sharp point. The medial and lateral vascular grooves are symmetrical. The proximal articular facet is tall and oval, with the medial articular facet slightly taller than the lateral facet. There is a rounded flexor tubercle on the ventral surface. Measurements are given in Table 3.

Right pedal phalanx III-1 (Figure 15; Figure S15)

Examples of all four phalanges of the third digit are preserved. Pedal phalanx III-1 is symmetrical along its sagittal plane and slender at mid-shaft, with pronounced expansion of the distal condyles. The proximal end is poorly preserved, with only a small portion of the articular facet preserved. The mid-shaft is circular in cross-section (Figure 15.1). In distal view the medial condyle is slightly taller than the lateral condyle. Both condyles have deep collateral ligament pits. Measurements are given in Table 3.

Right pedal phalanx III-2 (Figure 16; Figure S16)

Pedal phalanx III-2 is complete. It is elongate and nearly symmetrical. The mid-shaft is circular in cross-section (Figure 16.1). A shallow triangular depression is located proximally on the ventral surface of the shaft. The proximal articular facet is slightly taller on the medial side. It is concave and does not possess distinct facets for the medial and lateral condyles of phalanx III-1. A well-defined hyperextensor pit is located on the dorsal surface immediately proximal to the distal condyles. The lateral condyle is slightly taller than the medial condyle. Both the medial and lateral distal condyles have deep collateral ligament pits. Measurements are given in Table 3 (pedal phalanx measurements; lengths estimated where a specimen is not entirely preserved are marked with an asterisk (*), and portions not preserved are denoted by NP; doi:10.1371/journal.pone.0068649.t003).

Right and left pedal phalanges III-3 (Figure 17; Figure S17)

Right pedal phalanx III-3 is complete. The shaft is subcircular in mid-section (Figure 17.1). A well-defined hyperextensor pit is located on the dorsal surface immediately proximal to the distal condyles. A shallow depression is located at the proximal end of the ventral surface. A groove originates on the medial surface and terminates ventrodistally, emphasizing the ventral heel. The ventral heel terminates around mid-length of the phalanx. The proximal articular surface has two articular facets. The medial articular facet is taller and narrower relative to the broader lateral facet. The distolateral condyle is slightly taller than the distomedial condyle. Both condyles possess deep, well-defined collateral ligament pits. The left pedal phalanx is poorly preserved, with only a very fine veneer of bone preserved distal to the articular surface. In proximal aspect, the articular surface reveals the exact height of the proximal end, which was not preserved in the right specimen. Measurements are given in Table 3.
Left pedal phalanx III-4 (Figure 18; Figure S18)

Pedal phalanx III-4 is a nearly complete ungual. It is recurved laterodistally and tapers along its length to a sharp point. A prominent ridge extends along the medial edge and tapers distally to a point. The lateral ridge is less prominent than the corresponding ridge in phalanx II-3 and unlike that of phalanx I-2. A distinct, pinched tubercle is present proximally on the ventral surface of phalanx III-4. The lateral rim of the proximal articular surface bears a distinct rugose growth extending from the lateral articular facet surface. This rugosity is unusual and has not been observed in any of the other unguals of Australovenator. Therefore, this structure may be pathological in nature, perhaps a result of infection or arthritis surrounding the articular facet of the ungual phalanx. The shape of the proximal articular facet has been distorted by the bony growth. The distal end of the left pedal phalanx III-3 was not preserved, so it could not be compared with the potentially pathological pedal phalanx III-4 to confirm whether a corresponding pathology might have been present there (Figure 18). Measurements are given in Table 3.

Left pedal phalanx IV-1 (Figure 19; Figure S19)

One representative of each of the phalanges of the fourth pedal digit is preserved. Pedal phalanx IV-1 is complete. The proximal articular surface is tall, with a rounded, subtriangular outline in proximal view. The cross-section becomes circular at mid-length (Figure 19.1). The proximolateral articular facet is angled more laterally than the medial facet. The medial and lateral articular facets on the proximal articular surface are not distinctly separated; however, a faint, dorsoventrally oriented ridge weakly divides them (Figure 19J). The medial distal condyle has a deep collateral ligament pit, whereas the lateral collateral ligament pit is relatively shallow and obscured by matrix. The medial condyle is distinctly taller and distomedially splayed compared to the lateral condyle, which is angled laterodistally. Measurements are given in Table 3.

Left pedal phalanx IV-2 (Figure 20; Figure S20)

Pedal phalanx IV-2 is a poorly preserved specimen, with both proximal and distal ends incomplete. In proximal view, the lateral articular facet appears broader than the medial. The shaft has a rounded, sub-trapezoidal cross-section at mid-length. The medial distal condyle is taller and posteroventrally oriented compared to the lateral condyle, which is splayed laterally. The medial condyle has a deep collateral ligament pit preserved. The ventral heel consists of ventromedial and ventrolateral processes. The lateral process is slightly more bulbous than the medial. Measurements are given in Table 3.

Left pedal phalanx IV-3 (Figure 21; Figure S21)

Pedal phalanx IV-3 is nearly complete. The proximal articular surface is triangular. The lateral articular facet is slightly broader than the medial facet. The medial distal condyle is taller and slightly broader than the lateral distal condyle. Both the medial and lateral condyles have deep collateral ligament pits. The lateral surface of the phalanx is slightly concave dorsoventrally, as can be seen in the mid-shaft cross-section (Figure 21.1). Measurements are given in Table 3.
Right pedal phalanx IV-4 (Figure 22; Figure S22)

Pedal phalanx IV-4 is complete. The proximal articular surface is subtriangular in proximal view. It bears paired medial and lateral facets. The medial facet is proportionally taller and narrower than the lateral facet. A prominent proximomedial eminence is present on the ventral surface. The medial surface has a prominent ridge that originates from the medial articular facet rim and tapers into the ventral portion of the medial condyle. The ridge is bounded on its dorsal and ventral sides by shallow depressions, the ventral being larger than the dorsal. The lateral distal condyle is slightly broader, but shorter, than the medial distal condyle. Measurements are given in Table 3.

Right pedal phalanx IV-5 (Figure 23; Figure S23)

Pedal phalanx IV-5 is a complete ungual phalanx. It is recurved and tapers distally to a distinct point. It possesses nearly symmetrical medial and lateral grooves. In ventral view, the ungual is spear-shaped. The lateral leading edge is slightly longer than the medial side due to weak distomedial curvature. There is a distinct, ventrally pinched flexor tubercle at the proximal end of the ventral surface. The medial articular facet is angled medially in comparison to the lateral facet. Measurements are given in Table 3.

Reconstructed metatarsus and pes (Figures 24, 25; Figure S24)

A near complete right pes has been reconstructed using specimens from the right pes and mirrored elements known only from the left pes. Unfortunately, pedal phalanx I-1 is unknown. Despite this, the pes is the most complete amongst neovenatorids.

Femur

Australovenator, Neovenator, Fukuiraptor and Chilantaisaurus have preserved femora. Australovenator and Fukuiraptor have a groove visible in proximal view that tapers disto-laterally on the caudal surface. The groove is bounded by a pronounced flange, labelled the posterior flange of the caput in Brusatte et al. (Figure 21B in [18]). This groove and flange are less prominent in Neovenator and Chilantaisaurus. Australovenator and Fukuiraptor differ in that this flange buttresses the femoral head medially in Fukuiraptor, whereas it curves caudally around the femoral head in Australovenator (Figure 2IJ; Figure 12C in [23]; Figure 21E in [18]). However, we note the high level of morphological variability among specimens of Fukuiraptor [23].
The outline of the femur in distal view is similar in Australovenator and Neovenator, in which the medial and lateral condyles have a bulbous appearance, resulting in a narrower flexor groove and a shallow extensor groove (the latter is autapomorphically shallow and narrow in Neovenator; Figure 21 in [18]). The femora of Fukuiraptor and Chilantaisaurus have widely spaced distal condyles, creating much larger flexor and extensor grooves (see Figure 2KL; Figure 12D in [23]; Figure 21F in [18]; Figure 4FG in [26]). The cranial surface of the femoral shaft of Neovenator bears a distinct cranial intermuscular line (Figure 21A in [18]). This is not visible in Australovenator or Chilantaisaurus, perhaps due to poor preservation or less advanced ossification, and was not illustrated in Fukuiraptor [23]. The position of the fourth trochanter and its associated muscle scar are similar in Australovenator (Figure 2CD) and Neovenator (Figure 21B-D in [18]), whereas it is more proximally positioned in Fukuiraptor (Figure 12A in [23]). The crista tibiofibularis is similar in both Neovenator (Figure 21BC in [18]) and Australovenator (Figure 2CD). It is difficult to identify the shape of this process in Fukuiraptor and Chilantaisaurus because of damage. Australovenator, Fukuiraptor and Neovenator possess a distally projecting medial condyle, a morphological trait used in defining the clade Neovenatoridae that also appears in some carcharodontosaurids [20].

Tibia

Australovenator, Neovenator, Fukuiraptor, Orkoraptor and Chilantaisaurus all preserve at least partial tibiae. In proximal view, the tibial head of Australovenator is closer in morphology to that of Fukuiraptor. In both Neovenator (Figure 22E in [18]) and Orkoraptor (Figure 7C in [25]), the tibial head in proximal view has the medial condyle more caudally positioned than in Australovenator (Figure 3IJ) and Fukuiraptor (Figure 13C in [23]). The lateral condyle of the proximal tibia bears a spine-like anteroventral process in Australovenator (Figure 3B) and Fukuiraptor (Figure 13E in [23]), whereas in Neovenator (Figure 22C in [18]) it forms a sharp hook. This feature is not preserved in Chilantaisaurus. In distal view there do not appear to be major differences between Neovenator, Chilantaisaurus and Australovenator; the distal end of the tibia was not preserved in Fukuiraptor.

Fibula

Complete fibulae are known for Australovenator (Figure 4) and Neovenator (Figure 23 in [18]), and a partial fibula of Chilantaisaurus is also known. The preservation of the Chilantaisaurus element does not allow meaningful morphological comparisons to be made. The shaft of the Australovenator fibula is more elongate and slender than that of Neovenator.

Astragalus

Astragali are known in Fukuiraptor (Figure 14 in [23]) and Australovenator (Figure 5). There is a distinct dorsoventrally oriented ridge on the caudal surface that curves proximally from a central position to a lateral position in Australovenator, which does not appear to be present in Fukuiraptor (Figure 14C in [23]). In cranial view, the proximal cranial groove appears shallowly concave in Fukuiraptor, whereas there is a distinct distal groove on the lateral side of the Australovenator specimen.
Metatarsus

Comparisons were made of metatarsi from the primitive (and geologically older) to the derived (and geologically younger, although Chilantaisaurus is slightly younger than Australovenator) allosauroid theropods Allosaurus (Late Jurassic, U.S.A.), Neovenator (Early Cretaceous, United Kingdom), Chilantaisaurus (early Late Cretaceous, China), Megaraptor (early Late Cretaceous, Argentina) and Australovenator (Upper Cretaceous, Australia) (Figure 26). The proximal surface of metatarsal II of Australovenator was not preserved; however, the medial margin of metatarsal III is straight, implying that the lateral margin of metatarsal II was also straight. This straight margin was also recognised in Fukuiraptor (Figure 16F in [23]). This margin is slightly bulbous in both Neovenator (Plate 42, 5 in [18]) and Chilantaisaurus (Figure 6B in [26]); however, Neovenator has a slight medial groove, which is absent in Chilantaisaurus. In proximal view, metatarsal III is bulbous dorsally on the lateral margin in Australovenator. This feature is shared with Neovenator (Plate 44, 3 in [18]). In Chilantaisaurus (Figure 6B in [26]) this bulbous feature is reduced, creating a slightly curved lateral margin. Unfortunately, this feature is too hard to distinguish in Fukuiraptor (Figure 16L in [23]). The morphology of the distal surface of metatarsal III of Megaraptor (Figure 2 in [24]) is nearly identical to that of Australovenator. The proximal end of metatarsal IV of Australovenator has a rounded, subtriangular outline, with a concave medial margin buttressing the bulbous dorsal end of metatarsal III. Metatarsal IV of Megaraptor (Figure 10B in [27]) shares the closest morphology with Australovenator, with the same concave medial margin buttressing the bulbous end of metatarsal III. Neovenator also shares this concavity, though the lateral margin of this element is more rounded and oval (Plate 44, 3 in [18]). Metatarsal IV of Chilantaisaurus lacks this concave medial margin and is trapezoidal (Figure 6B in [26]).

Pedal phalanges

Comparison of neovenatorid pedal phalanges is difficult if recovered specimens do not directly articulate. It is therefore difficult to accurately compare the Fukuiraptor specimens (Figure 18 in [23]) with Australovenator. The only other neovenatorid currently represented by articulated pedal phalanges is Neovenator, which facilitated the following comparisons. In some cases the poor preservation of Australovenator specimens meant that meaningful comparisons were not achievable.

Pedal phalanx II-1

In cranial view, pedal phalanx II-1 is elongate and distinctly bows medially in Australovenator, whereas it is relatively straight and more robust in Neovenator (Table 4). The Australovenator specimen also appears proportionally narrower at its proximal end compared to the wider proximal end of Neovenator (Plate 44, 1 in [18]). In medial and lateral views, the distal condyles of Neovenator (Plates 44, 4 and 45, 2 in [18]) are more caudally pronounced with respect to the shaft of the phalanx than in Australovenator. The proximo-ventral articulation of Neovenator is also more ventrally pronounced on the phalanx than in Australovenator. The proximal surface of the Australovenator specimen is proportionally taller compared to that of Neovenator (Table 4).
Pedal phalanx IV-1

The Australovenator specimen curves distolaterally along its length, whereas the Neovenator specimen appears relatively straight. The Neovenator specimen (Plates 44, 1 and 45, 8 in [18]) also has a much blockier proximal end than the proportionally taller Australovenator specimen (Table 4).

Pedal phalanx IV-3

The pedal phalanx IV-3 of Australovenator is more elongate than that of Neovenator. The Neovenator specimen has a very short shaft which is almost indistinguishable from the proximal and distal ends (Plate 44, 9 in [18]) (Table 4).

Pedal phalanx IV-4

The pedal phalanx IV-4 of Australovenator is proportionally narrower and more elongate than that of Neovenator. It has a small section of phalanx shaft visible between the distal condyles and proximal articular facet, whereas the shaft is indistinguishable in Neovenator. The Australovenator specimen also appears proportionally taller in proximal aspect (Table 4).

Hind limb element proportions (Table 2)

The hind limb element proportions of Australovenator were compared with those of Neovenator, Chilantaisaurus, Fukuiraptor and Allosaurus. The Fukuiraptor specimen measurements from the holotype description were used for this analysis. Unfortunately, the tibia is incomplete, but it was reported to be of similar length to the femur [23]. As this is not an exact measurement, the percentage result should be used with caution. To achieve a comparable limb proportion with Fukuiraptor, we calculated a metatarsus-to-femur proportion percentage. Interestingly, this proportion indicates that Fukuiraptor has the most elongate metatarsus relative to femur length after Australovenator. The proportions indicate that a larger body plan is supported by a wider but shorter metatarsus.

Neovenator is the only other neovenatorid theropod in which the positions of the pedal phalanges have been determined. A comparison of their dimensions (height, width and length) with those of Australovenator indicated that the pedal phalanges of Australovenator are more elongate than those of Neovenator (Table 4).

Conclusion

These newly described hind limb elements of Australovenator, together with the holotype specimens previously described, mean that the hind limb of Australovenator is the most complete of any neovenatorid known to date. The discovery of the new hind limb elements enabled exact skeletal positions to be determined for each of the holotype pedal phalanges, as well as those newly described here. This will provide a point of comparison for future neovenatorid pedal phalanges and should ensure more accurate determination of the pedal position of isolated phalanges. The morphology of the metatarsus was found to be similar in Australovenator and Megaraptor, as demonstrated by metatarsals III and IV. Comparisons of Australovenator specimens with published figures and measurements of Neovenator, Fukuiraptor and Chilantaisaurus revealed that, in relation to body size, Australovenator had the most elongate hind limb and stride length. Additionally, the hind limb proportions indicate that larger forms possessed a shorter but wider metatarsus in comparison to proportionally smaller neovenatorids.

The morphological descriptions provided here are supplemented with two- and three-dimensional figures. The 3-D figures will allow other researchers to more accurately observe the hind limb elements of Australovenator than would otherwise be possible in a locality remote from the AAOD museum in which the specimens are housed.

Table 2. Hind limb lengths and hind limb element ratios.
Table 4. Length, width and proximal height comparisons of Australovenator and Neovenator. The ratios reveal that Australovenator possessed a more elongate pes. Asterisks (*) mark lengths which have been estimated due to poor preservation. doi:10.1371/journal.pone.0068649.t004
2018-04-03T00:11:01.820Z
2013-07-24T00:00:00.000
{ "year": 2013, "sha1": "5ca5c1f8e5cc7bbe89fa479fe849228683b911c7", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0068649&type=printable", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "b04bb95cc4d85ca10206e0e1535ed3198a823282", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
54060612
pes2o/s2orc
v3-fos-license
Petrov type I Condition and Rindler Fluid in Vacuum Einstein-Gauss-Bonnet Gravity

Recently the Petrov type I condition was introduced to reduce the degrees of freedom in the extrinsic curvature of a timelike hypersurface to the degrees of freedom in the dual Rindler fluid in Einstein gravity. In this paper we show that the Petrov type I condition holds for the solutions of vacuum Einstein-Gauss-Bonnet gravity up to the second order in the relativistic hydrodynamic expansion. On the other hand, if one imposes the Petrov type I condition and the Hamiltonian constraint on a finite cutoff hypersurface, the stress tensor of the relativistic Rindler fluid in vacuum Einstein-Gauss-Bonnet gravity can be recovered with the correct first order and second order transport coefficients.

Introduction

There has been increasing interest in the holographic duality between fluid dynamics and gravity in the past few years, while the suggestion of such a connection can be dated back to the 1970s, as proposed by Damour [1,2]. The approach developed into the membrane paradigm [3], which relates black hole evolution and diffusion to their counterparts in hydrodynamics [4,5,6,7,8]. In recent years, along with the progress in the anti-de Sitter/Conformal Field Theory (AdS/CFT) correspondence [9,10,11,12], the dual fluid has been generalized to the conformal fluid living on the boundary of AdS spacetime, which can describe the long wavelength and low frequency limit of conformal field theory [13,14,15]. In particular, a systematic method to study the duality was proposed in the fluid/gravity correspondence [16], which translates problems in fluid dynamics into problems in general relativity. It was then further expanded to arbitrary dimensions in [17,18,19] and to non-relativistic hydrodynamics in [20]. To build up the connection between the fluid/gravity correspondence and the membrane paradigm, a timelike hypersurface outside the horizon is introduced to study the universality of the hydrodynamic limit in the AdS/CFT correspondence and membrane paradigm [21,22,23]. Significantly, the authors in [23] consider the fluid living on a finite cutoff hypersurface from the viewpoint of Wilsonian renormalization, where the Dirichlet boundary condition on the hypersurface and regularity on the horizon are imposed. The fluid/gravity correspondence on the cutoff hypersurface can then be generalized to either asymptotically flat [24,25] or de Sitter spacetime [26], and it has been further studied in [27,28,29,30,31,32,33,34,35]. More general discussions of the fluid/gravity correspondence can also be found in [36,37,38,39,40], as well as in the frame of the AdS/Ricci-flat correspondence [41,42]. In the fluid/gravity duality, one of the most important developments is the so-called Rindler hydrodynamics [24,43,44,45,46,47,48], where the dual fluid lives on a constant acceleration hypersurface with a flat induced metric. More interestingly, it is found in [49] that in the near-horizon limit, instead of the regularity condition on the horizon, imposing the Petrov type I condition on the hypersurface can reduce the vacuum Einstein equations to the incompressible Navier-Stokes equations in one lower dimensional flat spacetime. This is mathematically much simpler than solving the gravitational field equations. Further study based on this framework can be found in [50,51,52,53,54,55].
From the point of view of degrees of freedom, the Petrov type I condition gives $(p+2)(p-1)/2$ constraints on the extrinsic curvature of a $(p+1)$-dimensional timelike hypersurface, or equivalently on the dual Brown-York stress tensor. Since a symmetric stress tensor on a $(p+1)$-dimensional hypersurface has $(p+1)(p+2)/2$ independent components, the degrees of freedom of the stress tensor are then reduced to $(p+1)(p+2)/2 - (p+2)(p-1)/2 = p+2$, which can be interpreted as the energy density, pressure and velocity field of the dual fluid [49]. Furthermore, the momentum constraint turns out to be the equation of motion of the dual fluid, and the Hamiltonian constraint can be interpreted as the equation of state. Recently, it has been shown in [56,57] that the Petrov type I condition can be used to recover the stress tensor of the dual fluid on the hypersurface order by order under an appropriate gauge choice. Without solving the perturbative gravitational field equations, the Rindler fluid in vacuum Einstein gravity can be recovered at least up to the second order in the relativistic hydrodynamic expansion [57]. Note that the stress tensor of the Rindler fluid in vacuum Einstein-Gauss-Bonnet gravity is found to be modified by the Gauss-Bonnet coefficient α in [44,47]. It is then quite interesting to ask whether the Petrov type I condition holds in vacuum Einstein-Gauss-Bonnet gravity and whether it can be used to recover the dual stress tensor. In this paper, we find that the Petrov type I condition for the solution of the vacuum Einstein-Gauss-Bonnet equations still holds up to the second order in the relativistic hydrodynamic expansion, and that, turning the logic around and imposing the Petrov type I condition and the Hamiltonian constraint, the stress tensor of the relativistic Rindler fluid can be recovered with the correct first order and second order transport coefficients, including the Gauss-Bonnet corrections. To be specific, in section 2 we first review the Rindler fluid in vacuum Einstein-Gauss-Bonnet gravity and show that the spacetime with perturbations is at least Petrov type I up to the second order in the relativistic hydrodynamic expansion. In section 3, we give a detailed derivation of the Petrov type I condition on a cutoff hypersurface in vacuum Einstein-Gauss-Bonnet gravity. In section 4, we turn the logic around and assume the Hamiltonian constraint and the Petrov type I condition on a finite cutoff hypersurface to recover the stress tensor of the dual fluid without using the details of the solution. We further study the Petrov type I condition in the non-relativistic hydrodynamic expansion in section 5, and conclude in section 6.

Rindler fluid in Einstein-Gauss-Bonnet gravity

To study the fluid dual to vacuum Einstein-Gauss-Bonnet gravity, we begin with the Einstein-Hilbert action on a $(p+2)$-dimensional Lorentz manifold $\mathcal{M}$, supplemented with the Gauss-Bonnet term $\mathcal{L}_{\rm GB} = R^2 - 4R_{\mu\nu}R^{\mu\nu} + R_{\mu\nu\sigma\lambda}R^{\mu\nu\sigma\lambda}$ and an appropriate surface term [58], where α is the Gauss-Bonnet coefficient. Varying this action with respect to the metric $g_{\mu\nu}$ yields the vacuum Einstein-Gauss-Bonnet field equations (2), whose standard form is

\[
R_{\mu\nu} - \frac{1}{2}g_{\mu\nu}R + \alpha\left[2\left(R R_{\mu\nu} - 2R_{\mu\lambda}R^{\lambda}{}_{\nu} - 2R^{\lambda\sigma}R_{\mu\lambda\nu\sigma} + R_{\mu}{}^{\lambda\sigma\rho}R_{\nu\lambda\sigma\rho}\right) - \frac{1}{2}g_{\mu\nu}\mathcal{L}_{\rm GB}\right] = 0. \qquad (2)
\]

The $(p+2)$-dimensional Rindler metric, which in ingoing coordinates takes the form $ds_{p+2}^2 = -r\,d\tau^2 + 2\,d\tau\,dr + \delta_{ij}\,dx^i dx^j$, is an exact solution of the field equations (2). On a timelike hypersurface $\Sigma_c$ with $r = r_c$, the induced metric $\gamma_{ab}\,dx^a dx^b = -r_c\,d\tau^2 + \delta_{ij}\,dx^i dx^j$ is intrinsically flat. After setting $16\pi G_{p+2} = 1$, the Brown-York stress tensor of Einstein-Gauss-Bonnet gravity on the cutoff surface $\Sigma_c$ can be written as in [59,27] (equation (6)); here $K_{ab}$ denotes the extrinsic curvature of the hypersurface $\Sigma_c$.
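For orientation, the following α → 0 expressions are standard results of Einstein gravity (not specific to this paper) and may help fix conventions. With $16\pi G_{p+2} = 1$, the Brown-York stress tensor and the vacuum Hamiltonian constraint on the intrinsically flat $\Sigma_c$ reduce to

\[
T_{ab} \;\longrightarrow\; 2\left(K\,\gamma_{ab} - K_{ab}\right), \qquad K \equiv \gamma^{ab}K_{ab},
\]
\[
K^2 - K_{ab}K^{ab} - \hat{R} = 0 \;\;\Longrightarrow\;\; K^2 = K_{ab}K^{ab},
\]

where $\hat{R}$, the intrinsic Ricci scalar, vanishes because the induced metric is flat. The Gauss-Bonnet generalization quoted in (6) adds α-dependent terms to these expressions, whose explicit form is given in [59,27].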
Rindler fluid in relativistic hydrodynamic expansion In order to study the dual fluid on the hypersurface Σ_c, one introduces the (p+1) independent parameters u^a = γ_v(1, v^i) and Ô, which are slowly varying functions of x^a = (τ, x^i). Here γ_v is fixed through γ_ab u^a u^b = −1. Keeping the induced metric on the timelike hypersurface Σ_c flat and imposing regularity on the future horizon, the solution of the vacuum Einstein-Gauss-Bonnet field equations (2) up to second order in the derivative expansion is given by [45,46]. The leading order term of g_ab in the derivative expansion is the boosted Rindler form, where the projection tensor is h_ab ≡ γ_ab + u_a u_b; in the equilibrium state the horizon position can be read off from it. The first order term of g_ab in the derivative expansion involves D ≡ u^c ∂_c and the acceleration a^a ≡ u^b ∂_b u^a. At second order in the derivative expansion, the Gauss-Bonnet corrections appear in the metric [47]. Here the fluid shear and vorticity are defined as 𝒦_ab = h_a^c h_b^d ∂_(c u_d) and Ω_ab = h_a^c h_b^d ∂_[c u_d]. The components of the inverse metric up to second order in the derivative expansion follow from g^{ab} = h^{ab} − h^{ac}h^{bd}g^{(2)}_{cd} together with the corresponding mixed components. One also needs to impose the constraint equations with D⊥_a ≡ h_a^c ∂_c, so that the metric (8) solves the vacuum Einstein-Gauss-Bonnet field equations (2) up to second order in the derivative expansion. With the metric (8) and an appropriate gauge choice, the dual stress tensor T^(GB)_ab of vacuum Einstein-Gauss-Bonnet gravity on the finite cutoff surface Σ_c in (6) has been obtained in [46]. On the other hand, the general stress tensor T^(R)_ab for a (p + 1)-dimensional relativistic fluid with vanishing equilibrium energy density was constructed in [45]. Comparing T^(GB)_ab in (18) with T^(R)_ab, one can read off the holographic transport coefficients of the Rindler fluid in vacuum Einstein-Gauss-Bonnet gravity as in (20). It turns out that there are no Gauss-Bonnet corrections to the shear viscosity η or to the parameter ζ′, the latter of which measures variations of the energy density. The Gauss-Bonnet corrections appear in the second order transport coefficients c_1 and c_3. The solution is Petrov type I The Petrov type classification of the Weyl tensor in higher dimensions is summarized in Appendix A. In this subsection, we will show that the Weyl tensor C_{µναβ} of the metric g_{µν} in (8) is at least Petrov type I. Choose (p + 2) Newman-Penrose-like vector fields, which include two null vectors with ℓ² = k² = 0 and p orthonormal space-like vectors m_i. The null vectors obey ℓ^µ k_µ = 1, and all other products vanish. Then the Weyl tensor C_{µναβ} is at least Petrov type I if there exists a frame {ℓ, k, m_i} such that P^(r)_ij = 0. A special kind of frame has been chosen in [57]. If we denote by n the spacelike unit normal vector of a constant r hypersurface, and by u the normalized (p + 2) velocity along the hypersurface, the two null vector fields can be chosen in terms of u and n. For the remaining orthonormal spatial vectors m_i, there still exists a freedom of choice.
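For concreteness, one choice of the null pair consistent with the normalizations just stated (ℓ² = k² = 0, ℓ^µ k_µ = 1, with u·u = −1, n·n = 1, u·n = 0) is the following; the specific 1/√2 normalization is our assumption here, since the boost freedom ℓ → λℓ, k → λ⁻¹k leaves the conditions invariant and [57] may fix it differently:

$$\ell^\mu = \frac{1}{\sqrt{2}}\big(u^\mu + n^\mu\big), \qquad k^\mu = \frac{1}{\sqrt{2}}\big(n^\mu - u^\mu\big),$$

so that ℓ^µ k_µ = ½(n·n − u·u) = 1 as required.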
The components of the frame have been chosen as in [57], with the components carrying lower indices obtained by lowering with the metric. Up to order ∂², one can check that g_{µν} m_i^µ m_j^ν = δ_ij is satisfied, and that the metric (8) as well as its inverse (16) can be decomposed in this frame. To check the Petrov type I condition P^(r)_ij = 0 of the Weyl tensor, we introduce the covariant quantity P^(r)_ab. After a straightforward calculation of the Weyl tensor with the metric (8), and taking into account g^(2)_cd with the Gauss-Bonnet corrections in (14), we conclude that P^(r)_ab vanishes up to second order at every spacetime point of (8). As a result, we have shown that the Weyl tensor of the spacetime with metric (8) is at least Petrov type I up to ∂², even when the Gauss-Bonnet term is included. Petrov type I condition on the hypersurface Σ_c The Petrov type I condition was introduced in [49] to reduce the degrees of freedom in the extrinsic curvature of the hypersurface Σ_c to the degrees of freedom in the dual fluid on Σ_c. On this hypersurface, the covariant Petrov type I condition is defined as in [57]. Using (22), we need to rewrite the Weyl tensor in terms of the extrinsic curvature K_ab through the Gauss-Codazzi equations on the intrinsically flat hypersurface Σ_c. Thus, we first define the relevant notations with γ^α_a = δ^α_a − n_a n^α = δ^α_a, as well as their contractions. Then, using the equations of motion (2), we obtain the projections of the Weyl tensor on the hypersurface Σ_c. This is similar to the derivation in [52] for the case of Einstein gravity with matter. Putting (34) into (29) and considering (30), we obtain P_ab = P^(α)_ab + δP^(H)_ab, where for convenience we have defined the corresponding quantities in (35)-(38). On the other hand, the Hamiltonian constraint for the vacuum Einstein-Gauss-Bonnet field equations (2), with the decomposition of the Riemann tensor in Appendix B, becomes H = H^(α) + δH^(H), whose explicit forms are given following [60]. The momentum constraint for the equations of motion (2) turns out to involve T^(GB)_ab, the stress tensor given in (6). Notice that P^(α)_ab in (35) has become a hypersurface function of the extrinsic curvature K_ab, but this is not true for δP^(H)_ab in (36). For example, we can see from [60] that the term Y_ab appearing in 2αH⊥_ab cannot be obtained from the extrinsic curvature K_ab and other intrinsic quantities alone, because additional information about the bulk gravity, such as R_{µν} or the analytic continuation of K_ab out of the hypersurface along n, is needed. Thus the purpose of the Petrov type I condition, namely to give constraints on the extrinsic curvature, cannot be realized in this case. However, if we consider the small Gauss-Bonnet parameter α limit and take the Petrov type I condition up to first order in the α expansion, this difficulty can be relieved. To see this, we first define all barred quantities to have the same form as their unbarred counterparts at α = 0. Then, putting (33) into (42) and (3), we obtain Ȳ_ab = −M̄_ab, as well as the related identities. With the calculations in Appendix B, equation (36) becomes a quantity of first order in the small α expansion, so that δP^(H)_ab is a function of K_ab, γ_ab and u_a. On the other hand, notice that the extrinsic curvature K_ab can be decomposed as K_ab = K̄_ab + δK^(α)_ab, where K̄_ab is the contribution from vacuum Einstein gravity, and δK^(α)_ab collects the terms from the Gauss-Bonnet term at first order in the small α expansion.
Then from (35) and the above decomposition, the covariant Petrov type I condition (29) up to first order in small α becomes (49). Similarly, the Hamiltonian constraint (39) up to first order in small α becomes (50). With the expansion of K_ab in (46), the Brown-York stress tensor (6) can also be expanded as (53), where T̄_ab is just the Brown-York stress tensor of Einstein gravity, and δT_ab comes from the Gauss-Bonnet term at first order in small α. In the following section, with the Petrov type I condition (49) and the Hamiltonian constraint (50), as well as the stress tensor (53), we will directly recover the stress tensor (18) of the Rindler fluid in vacuum Einstein-Gauss-Bonnet gravity. Notice that in Einstein gravity, K̄_ab can be expressed in terms of its Brown-York stress tensor through T̄_ab = 2(K̄γ_ab − K̄_ab). But if we consider the Gauss-Bonnet corrections in (6), since cubic terms of K_ab appear in J_ab, one cannot obtain the extrinsic curvature K_ab in terms of the stress tensor T^(GB)_ab in (53) at finite α. However, up to first order in small α, from (53) we can invert the relation (at α = 0, tracing the relation above over the (p+1)-dimensional hypersurface gives K̄ = T̄/(2p), hence K̄_ab = (T̄/(2p))γ_ab − T̄_ab/2), such that the Petrov type I condition on the hypersurface can also be expressed in terms of the Brown-York stress tensor of Einstein-Gauss-Bonnet gravity, T^(GB)_ab = T̄_ab + δT_ab. Although this is not necessary for section 4.2, formulas in terms of the stress tensor are much more in accord with the original purpose of the Petrov type I condition [49]. This also gives us a further motivation to take the small α limit, and we will use this strategy when studying the Petrov type I condition in the non-relativistic hydrodynamic expansion in section 5. From Petrov type I condition to Rindler fluid In this section, we will show how to recover the stress tensor dual to the bulk metric (8) by using the Petrov type I condition, without the details of the solution (8). We first set α = 0 to obtain the Rindler fluid in vacuum Einstein gravity from the Petrov type I condition and the Hamiltonian constraint. Then, regarding α as a small parameter, the Gauss-Bonnet corrections to the stress tensor up to first order in small α can also be obtained naturally. Recover the Rindler fluid in vacuum Einstein gravity Firstly, setting α = 0 in (49), we have the Petrov type I condition on the finite cutoff hypersurface Σ_c in vacuum Einstein gravity, where, similarly to (37), we have defined the corresponding barred quantities. On the other hand, from (56) we can reach the covariant Petrov type I condition of [57]. Now we decompose an arbitrary stress tensor T̄_ab associated with a (p + 1)-velocity u^a as T̄_ab = ρ u_a u_b + 2q_(a u_b) + Π_ab, with trace T̄ = −ρ + Π, where ρ is the energy density, q_a the transverse momentum flux, and Π_ab the stress part with trace Π. Substituting (62) into (61), we obtain (64). Similarly, when α = 0, the Hamiltonian constraint in (50) becomes (65). Expanding the undetermined stress tensor T̄_ab in (62) in terms of the derivative expansion parameter ∂, and assuming that the zeroth order of the stress tensor has the same form as that of the Rindler fluid (19), we can recover the first and second order terms of the total stress tensor (18) with α = 0 by imposing the Hamiltonian constraint (65) and the Petrov type I condition (64). As there is a freedom in the frame choice of the fluid velocity, we define the relativistic fluid velocity u^a such that q_a = u^c T̄_cd h^d_a ≡ 0 at arbitrary orders, and choose the appropriate isotropy gauge in which there is no higher order correction to the term proportional to h_ab, that is, only Ôh_ab appears in the stress tensor [45]. To be specific, we can proceed as follows. i) First order.
We put (66) and (67) into the Hamiltonian constraint (65) and the Petrov type I condition (64), and then expand them in the derivative expansion. Assuming q^(1)_a = 0, at first order we obtain the corresponding constraints. Choosing the isotropy gauge such that Π^(1) = ρ^(1) = 0, we reach the first order stress Π^(1)_ab. ii) Second order. With the results at first order and assuming q^(2)_a = 0, we can obtain the second order terms. Choosing the isotropy gauge such that Π^(2) = ρ^(2) = −2Ô⁻¹𝒦_ab𝒦^ab, and employing the derivatives of the momentum constraint equation (17), which lead to the required identities, we finally reach the stress tensor up to second order in the derivative expansion. Comparing the above stress tensor T̄_ab with the general stress tensor T^(R)_ab in (19), one can read off exactly the same coefficients as in (20) when α = 0. Thus, by using the Hamiltonian constraint and the Petrov type I condition, we recover the Brown-York stress tensor (18) dual to the bulk metric (8) in the case of Einstein gravity. Recover the Rindler fluid in Einstein-Gauss-Bonnet gravity In this subsection, we will recover the Rindler fluid in Einstein-Gauss-Bonnet gravity. For the convenience of calculation, and since H̄ ≡ 0, we write the Hamiltonian constraint (50) as (73), where H^(α) and δH^(H) can be found in (40) and (52), respectively. Since P̄_ab ≡ 0, the Petrov type I condition in (49) becomes (74), where P^(α)_ab and δP^(H)_ab can be found in (35) and (45). On the other hand, from (56) and the results in (73), one obtains the corresponding relations. We then assume the decomposition (77) of the extrinsic curvature; from (46) we then conclude (78), and putting (77) into (52) and (35) gives the required expansions. As the Gauss-Bonnet corrections to the Hamiltonian constraint and the Petrov type I condition appear at second order in the derivative expansion, we only need to consider the second order corrections δ̺^(α) ∼ δπ^(α). Thus, putting (78) into (40) and (35), and taking into account (80) and (81) at first order in the small α expansion, we obtain (82) and (85). With (82) and (85), at second order in the derivative expansion, the Hamiltonian constraint and the Petrov type I condition lead to the corresponding second order relations. We can see that there is no constraint on ̺^(α) at this order; it will be determined by the gauge choice of the stress tensor. Then from (55) we obtain the induced correction, and a straightforward calculation from (55) and (77) gives the remaining piece, where Π^(1)_ab has been obtained in (69). Putting them together, we obtain the combined correction (90). The isotropic gauge of the pressure leads to δ̺^(2) = αÔ(𝒦_cd𝒦^cd − 6p⁻¹Ω_cdΩ^cd). The stress tensor from the Petrov type I condition then turns out to be T̄_ab + δT_ab, with (74) and (90), which matches exactly the T^(GB)_ab in (18) from the fluid/gravity calculation. The non-relativistic hydrodynamic expansion The Rindler fluid with Gauss-Bonnet corrections has been studied in [43,44] in the non-relativistic hydrodynamic expansion v_i ∼ ε, P ∼ ε², ∂_i ∼ ε, ∂_τ ∼ ε². The dual stress tensor turns out to be T_ab = T̄_ab + δT_ab, where T̄_ab comes from the Einstein sector and is given in [43]. Here the fluid shear is σ_ij = ∂_(i v_j) and the vorticity is ω_ij = ∂_[i v_j]. The contribution δT_ab comes from the Gauss-Bonnet term, with the non-vanishing components given in [44,47]. We can see that the contributions from the Gauss-Bonnet term only appear at order ε⁴. This follows from the fact that the first non-zero components of the Riemann tensor appear at order ε² [44]. Note that the situation for the case of Einstein gravity has been studied in [56].
Thus we only need to focus on the Gauss-Bonnet corrections to the Petrov type I condition and the Hamiltonian constraint at order ε⁴ in this section. Petrov type I condition in Rindler fluid Introducing the new coordinate x⁰ = √r_c τ, the flat induced metric γ_ab in (5) becomes manifestly Minkowskian. The (p + 2) Newman-Penrose-like vector fields are given with respect to the ingoing and outgoing pair of null vectors as in [49]; here n is the unit normal vector of the hypersurface Σ_c, and ∂_0 and ∂_i are the tangent vectors to Σ_c. The spacetime is at least Petrov type I if P_ij = 0 in this frame. With the Gauss-Codazzi equations given in (34), we obtain the Petrov type I condition up to first order in the small α expansion as (98), and the Hamiltonian constraint becomes (102), with δH^(H) ≡ −4αH_{µν}n^µn^ν = α(−4M̄_abM̄^ab + M̄_abcdM̄^abcd). Notice that the frame choice in (96) singles out a preferred time coordinate ∂_0 and thus breaks Lorentz invariance. It has been shown in [56] that, with the frame (96), the Petrov type I condition for vacuum Einstein gravity, P̄_ij = 0, is violated at order ε⁴. However, after straightforward calculations with the stress tensors (92) and (93), we find that there are no Gauss-Bonnet corrections to the Hamiltonian constraint (102) or to the Petrov type I condition (98) up to order ε⁴ and up to first order in small α. In the following subsection, we will show that, either demanding P̄_ij = 0, or taking the stress tensor (92) of the Rindler fluid in vacuum Einstein gravity and imposing condition (108), we can obtain exactly the contribution (93) of the Gauss-Bonnet term to the stress tensor of the dual fluid, without solving the Einstein-Gauss-Bonnet field equations. Recover the Gauss-Bonnet corrections If we still demand the Petrov type I condition P̄_ij = 0 of vacuum Einstein gravity, it has been shown in [56] that the stress tensor in (92) can be recovered up to an additional term δT^(E)_ab at order ε⁴. Then, using T̄_ab + δT^(E)_ab instead of T̄_ab in (92), we can obtain the extrinsic curvature K̄_ab from (56) and put it into (104) and (101), which lead to the same results as (106) and (107). With (7), (56) and (92), we obtain the non-zero components of J̄_ab; substituting them into (108), and choosing the isotropic gauge such that there are no corrections to the δ_ij part of the stress tensor at this order, as in [43,44], we finally obtain the Gauss-Bonnet corrections δT_ab. These results exactly match the Gauss-Bonnet corrections to the stress tensor of the Rindler fluid given in (93) and (94) from the fluid/gravity calculation. Alternatively, once the stress tensor T̄_ab of the Rindler fluid in vacuum Einstein gravity (92) is given from the fluid/gravity calculation, by demanding that condition (108) hold, i.e. that the additional Gauss-Bonnet corrections to the Hamiltonian constraint and the Petrov type I condition vanish, one can show that the formulas between (110) and (116) are the same as those obtained by using T̄_ab + δT^(E)_ab, so that we again obtain the Gauss-Bonnet corrections (93) to the stress tensor of the Rindler fluid in Einstein-Gauss-Bonnet gravity. Conclusion To summarize, we have checked the Petrov type I condition for the vacuum solutions of Einstein-Gauss-Bonnet gravity in both the relativistic and non-relativistic hydrodynamic expansions. With the solution constructed in [47], we have shown that the spacetime is at least Petrov type I up to second order in the relativistic hydrodynamic expansion.
Turning the logic around, assuming the Hamiltonian constraint and the Petrov type I condition on a finite cutoff hypersurface, we have shown that the dual stress tensor can be recovered with the correct first and second order transport coefficients by taking the Gauss-Bonnet coefficient as an expansion parameter. In the non-relativistic hydrodynamic expansion [44], although the Petrov type I condition is violated at order ε⁴ in vacuum Einstein gravity [56], we have found that the Gauss-Bonnet term does not contribute to the violating terms in the Petrov type I condition up to ε⁴. Thus, given the stress tensor of the Rindler fluid in vacuum Einstein gravity, we have shown that, by demanding that the additional Gauss-Bonnet corrections to the Petrov type I condition and the Hamiltonian constraint vanish at first order in the α expansion, the Gauss-Bonnet corrections to the stress tensor of the dual fluid can also be recovered. Notice that in both cases, in order to recover the stress tensor of the dual fluid from the Petrov type I condition, we have additionally taken the small α limit. Up to first order in the α expansion, the Petrov type I condition can be expressed as a function of the extrinsic curvature and other intrinsic quantities on the hypersurface. Actually, given that the Einstein-Gauss-Bonnet field equations are quasi-linear in α [61,60], and that the dual stress tensor with Gauss-Bonnet corrections in (18) is also linear in α, it is not surprising that we can still recover the stress tensor (18) even when we take the small α limit. So far, most studies of the Petrov type I condition have focused on the case of asymptotically flat spacetimes. It is quite important and interesting to investigate the corresponding statements for asymptotically AdS spacetimes based on the AdS/CFT correspondence, since the regularity condition on the future horizon of the spacetime is necessary and important for the perturbations in the fluid/gravity correspondence, and imposing the Petrov type I condition on the spacetime is mathematically much simpler than directly solving the perturbative gravitational field equations in order to find the stress tensor of the dual fluid. On the other hand, the KSS bound [4] states that the ratio of shear viscosity to entropy density obtained from AdS/CFT calculations is bounded below by the universal value η/s = 1/4π, while in AdS gravity with curvature squared corrections the bound is found to be violated by the Gauss-Bonnet term [62,63,64]. With the static black brane solution in [65], it is expected that the value with the Gauss-Bonnet correction, η/s = [1 − 2(p+1)(p−2)α]/4π, can also be recovered from the Petrov type I condition on the dual fluid. A Classification of the Weyl tensor In four dimensional spacetime, tensor classification plays an important role in studying exact solutions of the Einstein field equations [66]; in particular, the Petrov type classification of the Weyl tensor has interesting physical applications. It has been generalized to arbitrary higher dimensional spacetimes in [67]. In this appendix, we briefly summarize these results based on [68,69]; they reduce to the Petrov classification in four dimensions.
Consider a p + 2 dimensional Lorentz manifold (p ≥ 2) with signature (− + ...+) and choose a null frame ℓ, k, m_i which satisfies the orthogonality and normalization conditions, so that in this frame the metric of the manifold can be decomposed accordingly. The null frame is covariant under the boost transformation ℓ → λℓ, k → λ⁻¹k, m_i → m_i. For a rank q tensor T on the manifold, its components T_{µ1...µq} with a fixed list of indices are null frame scalars, and under the boost transformation they transform as T_{µ1...µq} → λ^b T_{µ1...µq}; the exponent b is called the boost weight of the null-frame scalar T_{µ1...µq}. The boost order (along ℓ) of the tensor T is defined to be the largest value of b among all the non-vanishing components T_{µ1...µq}. It is only a function of the null direction ℓ and is denoted B(ℓ). The Weyl tensor can be decomposed and sorted by the boost weight of its components, C_{αβγδ} = C^[2]_{αβγδ} + C^[1]_{αβγδ} + C^[0]_{αβγδ} + C^[−1]_{αβγδ} + C^[−2]_{αβγδ}, where the superscript index indicates the boost weight, and for example C^[2]_{αβγδ} = 4C_{(ℓ)i(ℓ)j} k_{{α} m^i_{β} k_{γ} m^j_{δ}} and C^[1]_{αβγδ} = 8C_{(ℓ)(k)(ℓ)i} k_{{α} ℓ_β k_γ m^i_{δ}} + 4C_{(ℓ)ijk} k_{{α} m^i_β m^j_γ m^k_{δ}}, where abbreviations such as C_{(ℓ)i(k)j} ≡ C_{µανβ} ℓ^µ m_i^α k^ν m_j^β have been introduced. The Weyl tensor is generically of boost order B(ℓ) = 2, and a null vector ℓ is said to be aligned with the Weyl tensor whenever B(ℓ) ≤ 1. In this case, ℓ is a Weyl aligned null direction, and 1 − B(ℓ) ∈ {0, 1, 2, 3} is the order of alignment; it depends on the rank and symmetry properties of the tensor. According to [67], the principal type of the Weyl tensor in a Lorentzian manifold is I, II, III, N according to whether there exists an aligned ℓ of alignment order 0, 1, 2, 3, respectively. If no aligned ℓ exists, the manifold is of (general) type G; if the Weyl tensor vanishes, the manifold is of type O. The algebraically special types, with their necessary conditions, are summarized as follows: Type I: C_{(ℓ)i(ℓ)j} = 0; Type II: C_{(ℓ)i(ℓ)j} = C_{(ℓ)ijk} = 0; Type III: C_{(ℓ)i(ℓ)j} = C_{(ℓ)ijk} = C_{ijkl} = C_{(ℓ)(k)ij} = 0; Type N: C_{(ℓ)i(ℓ)j} = C_{(ℓ)ijk} = C_{ijkl} = C_{(ℓ)(k)ij} = C_{(k)ijk} = 0. (123) Following the curvature tensor symmetries and the trace-free condition [68], one can restate the familiar Petrov types in terms of the boost-weight blocks: Type I: C^[2]_{αβγδ} = 0; Type II: C^[2]_{αβγδ} = C^[1]_{αβγδ} = 0; Type III: C^[2]_{αβγδ} = C^[1]_{αβγδ} = C^[0]_{αβγδ} = 0; Type N: only C^[−2]_{αβγδ} is non-vanishing. Further, more detailed classifications can be found in [68,69].
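The boost-weight bookkeeping in this appendix is mechanical enough to sketch in code. The snippet below is an illustrative sketch, not from the paper: it classifies the principal type from the boost-weight blocks of the Weyl tensor in one given null frame, following the necessary conditions listed above; the hard step of actually searching for an aligned null direction ℓ is not attempted, so a single frame can only rule types out.

```python
import numpy as np

def principal_type(blocks, tol=1e-12):
    """Classify the Weyl principal type from its boost-weight blocks.

    blocks maps a boost weight b in {2, 1, 0, -1, -2} to an ndarray of the
    frame components with that weight (e.g. blocks[2] holds C_(l)i(l)j).
    Only a necessary condition in this frame is checked, since the type is
    defined through the existence of some aligned frame.
    """
    # Highest non-vanishing boost weight b gives the boost order B(l);
    # alignment orders 1 - B(l) = 0, 1, 2, 3 correspond to types I, II, III, N.
    labels = {2: "G", 1: "I", 0: "II", -1: "III", -2: "N"}
    for b in (2, 1, 0, -1, -2):
        block = np.asarray(blocks.get(b, 0.0))
        if np.max(np.abs(block)) > tol:
            return labels[b]
    return "O"  # all blocks vanish: the Weyl tensor is zero

# Example: only boost weights <= 0 survive in this frame -> at most type II here.
p = 2
example = {2: np.zeros((p, p)), 1: np.zeros((p, p, p)), 0: np.ones((p, p))}
print(principal_type(example))  # -> "II"
```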
2014-08-27T18:33:21.000Z
2014-08-27T00:00:00.000
{ "year": 2014, "sha1": "5c24af7c7ad6fda953f1bc60e64615dbbdeb74e4", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP12(2014)147.pdf", "oa_status": "GOLD", "pdf_src": "Arxiv", "pdf_hash": "5c24af7c7ad6fda953f1bc60e64615dbbdeb74e4", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
244704934
pes2o/s2orc
v3-fos-license
Research on Evaluation of Green Smart Building Based on Improved AHP-FCE Method With the accelerated pace of urbanization, green buildings and green smart buildings have gradually come into view and are highly valued by all sectors of society under the premise of the sustainable development strategy. Firstly, this paper selects 7 first-level index factors and 20 second-level index factors to establish the green smart building evaluation system. Secondly, this paper uses the analytic hierarchy process-fuzzy comprehensive evaluation (AHP-FCE) method to determine the weight of each secondary index. Finally, the feasibility of the evaluation system is verified by case analysis, and some suggestions on green smart buildings are put forward. Introduction A large number of buildings will be built in the course of new urbanization. However, buildings are among the largest energy consumers in the world. As people pay more and more attention to issues such as energy, the environment, and sustainable development, the development of green smart architecture has become a new direction that conforms to new urbanization construction. In the "Guiding Opinions on Accelerating the Establishment and Improvement of a Green Low-Carbon Circular Development Economic System" issued by China's State Council in February 2021, it is emphasized that green planning, green design, and green construction should be carried out in an all-round way, and that high-quality development and high-level protection should be promoted so as to ensure the realization of the goals of carbon peak and carbon neutrality [1]. In today's fast-developing construction industry, to achieve the goals of building energy conservation, environmental protection, and greenness, and to provide humans with a safe, comfortable, and healthy production and living environment, the construction industry needs to shift from rapid development to high-quality development. Green smart building is a new generation of building incorporating BIM, GIS, Internet of Things, cloud computing, and other technologies. It saves resources and improves energy utilization while reducing environmental pollution and resource waste, and it can greatly help to alleviate China's current energy shortage. At present, there are relatively mature standards for the evaluation of green buildings, but there are few studies on the comprehensive evaluation of green smart buildings. Combining the smart building evaluation index factors, this paper tries to build a simple and clear green smart building evaluation system based on the green building evaluation system, so as to enrich the new green building evaluation standards and promote the evaluation and development of green smart buildings. Research Status Arkin and Paciuk pointed out that intelligent buildings increasingly use intelligent devices, materials, and sensors, and that intelligent buildings should provide environments and means for the best use of buildings; they studied some contemporary intelligent buildings based on their level of system integration [2]. Green buildings are buildings related to resource efficiency, life cycle effects, and building performance; smart buildings, with integrated building technology systems as the core, are buildings related to building and operational efficiency, as well as enhanced management and occupant functions. Sinopoli has studied the commonalities between the two [3].
Runde and Fay pointed out that building automation requires a large number of smart devices, and that modern building automation systems are composed of as many as thousands of components with many attributes and dependencies [4]. Robichaud and Anantatmula's research shows that adding a team of professionals to a project can promote the completion of green building projects better and faster [5]. Chen and Huang suggested the establishment of an environmental health information management platform to provide residential users with a comfortable and healthy indoor environment [6]. Balta-Ozkan et al. defined an intelligent building as a residence equipped with a communication network linking sensors, household appliances, and devices that can be remotely monitored, accessed, or controlled to provide services that respond to the needs of its residents. They studied the similarities and differences in the technical and economic driving factors and obstacles to the development of the smart home market in three European countries characterized by different policies and socioeconomic backgrounds [7]. Shaikh et al. conducted comprehensive and significant research on state-of-the-art intelligent control systems for energy and comfort management of intelligent energy buildings [8]. Buckman et al. claimed in 2014 that "intelligent" can be used interchangeably with "smart," with no obvious difference between the two [9]. Attoue et al. proposed the concept of smart buildings that use smart technology to reduce energy consumption and improve comfort and user satisfaction [10]. Research by To et al. found that building users tend to focus more on intelligent security systems, followed by intelligent and responsive fresh air supply, elevators, and escalators [11]. Ding and Fan pointed out that most green buildings certified by rating tools are mainly evaluated based on their design and construction; the life cycle of green buildings goes beyond these initial stages, and their full benefits become more apparent during the operation phase of the building [12]. Zhao et al. reviewed and analyzed 2,980 articles published from 2000 to 2016; the results show that green building research is concentrated in the fields of engineering, environmental science, ecology, and construction technology [13]. Apanaviciene et al. researched and defined the characteristics that smart buildings should meet in order to be compatible with the overall background of smart cities, and introduced a new evaluation framework for integrating smart buildings into smart cities [14]. Eini et al. proposed a real-time management system to control all aspects of smart buildings and specified the system's performance specifications, design requirements, and operational constraints [15]. Long et al. [16] started from the concepts of intelligent buildings and the indoor ecological environment and introduced the use of passive methods, such as energy-saving windows and exterior building sunshades, and active methods, such as displacement ventilation and cold radiant ceilings, to improve the indoor environment of smart buildings. After analyzing the concepts and characteristics of green buildings and intelligent buildings as well as their development status at home and abroad, Yin et al. [17] put forward a harmonious and unified view of "human, building and nature," aiming at saving energy and resources and at harmless, pollution-free, recyclable, harmonious and sustainable development of society.
Through a large number of investigations, combined with engineering construction practice, Duan [18] integrated a variety of green building evaluation systems to develop a green construction evaluation standard for construction projects. Wang and Zhou [19] studied in depth the LEED green building evaluation system developed in the United States and the "Green Building Evaluation Standards" issued by China, compared the two standards, and then constructed a simple evaluation system using the AHP method. Liu and Peng [20], based on an in-depth understanding of green building and real estate development, combined the two to build a green real estate development evaluation index system, adopted the AHP-FCE method to establish a green real estate evaluation model, and, drawing on the index weights, put forward policy recommendations for realizing green real estate development. Xiong et al. [21] comparatively analyzed domestic and foreign green building evaluation systems and, on this basis, built a green intelligent building evaluation system based on the 2014 version of the green building evaluation standards, established a five-level evaluation standard, and determined the weights of the evaluation indicators and a comprehensive evaluation model. Wang et al. [22,23] analyzed and studied the influence of EBI, FCS, and AIOT technologies on the building automation systems of modern green intelligent buildings. The application of these technologies further enhanced and improved the control level, use functions, and service efficiency of green intelligent buildings. These technologies lay the foundation for truly realizing the "green" and "intelligence" of buildings and create the conditions for the further transformation of intelligent buildings into super-intelligent buildings and smart buildings. Building on this academic research at home and abroad, scholars have continuously studied green buildings and intelligent buildings. The evaluation objects of green intelligent buildings mainly include "four savings and one environmental protection," intelligent equipment, technology, environment, materials, and management. These evaluation systems have laid the foundation for the development of green smart buildings. Under the policy background of the green economy and sustainable development, we have established a green smart building evaluation system including safety and durability, health and comfort, convenience of life, resource conservation, environmental livability, smart, and innovation and characteristics indicators; we then used the analytic hierarchy process-fuzzy comprehensive evaluation (AHP-FCE) method to determine the weight of each secondary indicator and established a five-level evaluation standard. Modeling Steps of Improved Analytic Hierarchy Process-Fuzzy Comprehensive Evaluation (AHP-FCE) Method Establish a Set of Evaluation Indicators. We need to build a judging evaluation index system for the goal. Generally speaking, the fuzzy comprehensive discriminant model includes three levels of indicators, namely, the target level, the criterion level, and the plan-level factor set. The evaluation object U is a collection of evaluation indicators, which is hierarchical. The first-level indicators can be established as (U_i), i = 1, 2, 3, ..., n, so the index system is U = {U_1, U_2, ..., U_n}. The secondary indicators can be established as (U_ij), where N_i is the number of secondary indicators included in U_i.
Construct Fuzzy Relation Matrix. For the pairwise comparisons, the three-scale values are defined as: u_ij = 0 if factor U_i is less important than factor U_j; u_ij = 1 if factor U_i and factor U_j are equally important; and u_ij = 2 if factor U_i is more important than factor U_j. For the fuzzy relation matrix, also known as the membership matrix, it is necessary to establish not only the comment set but also the membership set of grade factors. In this way, after quantitative analysis, the specific position of each factor that may affect the evaluation object within the grades can be determined, forming the fuzzy relation matrix P with entries p^k_ij = v_ijk/M, where v_ijk is the number of experts who assign indicator U_ki to grade V_j, and M is the total number of experts. Calculate Weight Using Improved AHP Method. The analytic hierarchy process (AHP) is a multiobjective decision analysis method that combines qualitative and quantitative analysis. The improved analytic hierarchy process in this article is based on the traditional analytic hierarchy process, draws on the method of Ba [24], and changes the strategy for constructing the judgment matrix. The previous nine-scale method is replaced by a more concise three-scale method, which is easier for experts to understand and makes judging and scoring more intuitive. The improved AHP method improves the accuracy of judgment, and the consistency check step can be omitted after using the optimal transfer matrix, which reduces the computational workload [25,26]. We then solve for the elements h_ij of the judgment matrix H via the optimal transfer matrix and obtain the quasi-optimal consistent matrix E. Calculate M_i, the product of the elements of each row of the matrix E constructed above, and then its nth root w_i = M_i^(1/n). Normalizing gives W'_i = w_i / Σ_{i=1}^n w_i; finally, we obtain the weight vector W of the n elements. Green Smart Building Evaluation Index System Based on Improved AHP-FCE Method To build a more systematic and comprehensive evaluation system for green smart building projects, it is necessary to select the first and second level indicators and the corresponding scoring rules, and the indicators should be relatively independent so as to avoid redundant and miscellaneous indicators. At the same time, in order to facilitate understanding, calculation and application, the construction of the index system should also be simple and easy to implement. Following the principles of systematicity, dynamics, and relative independence, combined with China's latest "Green Building Evaluation Standard" (GB/T50378-2019) and the group standard "Smart Building Evaluation Standard" issued by the China Building Energy Conservation Association in 2021, we have built the evaluation index system for green smart buildings in Table 1. Empirical Analysis Xiang'an Zhengrong Mansion is located at the intersection of Shamei Road and Xiang'an South Road. It was built by XM Zhengpeng Real Estate Co., Ltd. The total construction area of the project is 114,307.13 square meters, covering a site of 27,595.52 square meters; the greening rate is 30%, and the plot ratio is 2.8. The planned properties include commercial streets, landscape gardens, and basketball courts. The project is surrounded by Xiangshan Park and Shamei Park. The environment is beautiful, and the site is close to a subway entrance and exit, making travel very convenient. Building Evaluation System Based on Improved AHP-FCE Model.
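Before applying the method to the case data, the weighting recipe of the preceding section can be condensed into a short sketch. This is our reading of the improved three-scale procedure; the range-to-judgment conversion and the base-10 optimal transfer matrix are assumptions about the variant described in [24], not a verbatim reproduction of it.

```python
import numpy as np

def improved_ahp_weights(U, k_m=9.0):
    """Weights from a three-scale comparison matrix U (u_ij in {0, 1, 2}).

    Steps: importance index r_i -> range judgment matrix B -> optimal transfer
    matrix H (which enforces consistency, so no consistency check is needed)
    -> quasi-optimal consistent matrix E = 10**H -> row geometric means.
    """
    U = np.asarray(U, dtype=float)
    n = U.shape[0]
    r = U.sum(axis=1)                      # importance ranking index
    rng = r.max() - r.min()

    def scale(d):                          # map a rank difference into [1, k_m]
        return d / rng * (k_m - 1.0) + 1.0 if rng > 0 else 1.0

    B = np.array([[scale(r[i] - r[j]) if r[i] >= r[j]
                   else 1.0 / scale(r[j] - r[i])
                   for j in range(n)] for i in range(n)])
    C = np.log10(B)
    H = (C[:, None, :] - C[None, :, :]).mean(axis=2)   # optimal transfer matrix
    E = 10.0 ** H                          # quasi-optimal consistent matrix
    w = E.prod(axis=1) ** (1.0 / n)        # n-th root of row products, M_i^(1/n)
    return w / w.sum()                     # normalized weight vector W'

# Example: three indicators, the first clearly dominant.
print(improved_ahp_weights([[1, 2, 2],
                            [0, 1, 2],
                            [0, 0, 1]]).round(4))
```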
Next, we use the improved AHP-FCE method to comprehensively evaluate the green smart project level of Xiang'an Zhengrong Mansion in combination with the 7 primary index factors and 20 secondary index factors listed in Table 1. Construction of Judgment Matrix and Single-Layer Weight Calculation. According to the green smart building evaluation index system established in Table 1, the hierarchical structure is constructed by combining the interrelationships between the indicators. Experts from the green smart building and real estate industries were invited to compare and score each factor; a judgment matrix is constructed, and the corresponding weights are calculated. Calculating according to the steps of the improved fuzzy comprehensive evaluation method gives the weights of each criterion layer (first-level indicators). Using the same method and principle, we construct the judgment matrices of the index layer (secondary indicators) against the criterion layer: safety and durability indicators U_1 = (U_11, U_12); health and comfort indicators U_2 = (U_21, U_22, U_23, U_24); save resources indicators U_4 = (U_41, U_42, U_43, U_44); livable environment indicators U_5 = (U_51, U_52); and the innovation and characteristics index weight W_U7 = (0.5000, 0.5000). Calculation of the Composite Weight of Each Layer Element to the Target Layer. Through the above calculations and evaluation results, the weight of each indicator for the comprehensive evaluation of the green smart building project is obtained, as shown in Table 2. The weight distribution of the indicators in Table 1 is shown in Figures 1 and 2. (Table 1, referenced above, also lists the detailed scoring items for the smart indicators, covering public safety warning and security monitoring, the smart architecture and platform U62, and smart operation U63, as well as the improvement and innovation items U71.) The main indicators that affect the evaluation of green smart buildings are save resources (U_4,
weight is 0.3451) and smart (U_6, weight is 0.3451), followed by safety and durability (U_1, weight is 0.1481). The main indicator that affects safety and durability (U_1) is safety (U_11, weight 0.7500); the main indicator that affects health and comfort (U_2) is the indoor hot and humid environment (U_24, weight 0.5638); the main indicator that affects convenience of life (U_3) is service facilities (U_32, weight 0.6370); the main indicator that affects save resources (U_4) is energy-saving and energy utilization (U_42, weight 0.5638); the main indicator that affects livable environment (U_5) is site ecology and landscape (U_51, weight 0.7500); the main indicator that affects smart (U_6) is smart operation (U_63, weight 0.6370); and the main indicators that affect innovation and characteristics (U_7) are improvement and innovation (U_71, weight 0.5000) and characteristics (U_72, weight 0.5000). The overall ranking of the indicator weights is shown in Figure 3. Among all the impact indicators, the most important is smart operation (U_63), followed by energy-saving and energy utilization (U_42), then safety (U_11), water-saving and water resources utilization (U_43), and smart architecture and platform (U_62). Determine the Set of Evaluation Criteria. The evaluation standard set for green smart building projects adopts the five-star rating system of the "Smart Building Evaluation Standards": one-star, two-star, three-star, four-star, and five-star. Using V to denote the set of evaluation criteria, V = {one-star, two-star, three-star, four-star, five-star}. Fuzzy Comprehensive Evaluation of Criterion Level. According to the actual situation of the project, this paper consulted a 10-member expert group composed of experts from the construction, environmental protection, and real estate industries by collecting relevant information and using questionnaire surveys, and collected the expert group's review opinions on the green smart building project. The fuzzy evaluation matrices are as follows: the safety and durability index matrix, the health and comfort index matrix, the convenience of life index matrix, the save resources index matrix, the livable environment index matrix, the smart index matrix, and the innovation and characteristics index matrix. According to the steps of the improved AHP method, the calculated weight vector W of each evaluation index is taken, the fuzzy evaluation matrix is established, and the comprehensive evaluation vectors of the criterion layer (first-level indicators) are calculated according to the formula Y = W × P, giving in turn the comprehensive evaluation vectors of the safety and durability, health and comfort, convenience of life, save resources, livable environment, smart, and innovation and characteristics indices. According to the formula Y = W × P, the comprehensive evaluation vector of the target layer is Y = (0.2064, 0.5200, 0.2112, 0.0487, 0.0137). (28)
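The two closing steps carried out below can be checked mechanically. In this sketch, the star level follows from the maximum-membership rule on the vector just computed; the quantified standard set G is not given numerically in the text (it is the median of each star band), so the values used here are placeholders only.

```python
import numpy as np

Y = np.array([0.2064, 0.5200, 0.2112, 0.0487, 0.0137])  # target-layer vector
stars = ["one-star", "two-star", "three-star", "four-star", "five-star"]

print(stars[int(np.argmax(Y))])          # maximum membership -> "two-star"

G = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # placeholder quantified standard set
S = Y @ G                                # S = Y x G^T
print(round(float(S), 4))                # ~2.14 with these placeholder medians
```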
According to the principle of maximum membership degree, the comprehensive evaluation level of the green smart building project can be determined. The maximum comprehensive evaluation value of the green smart building project in this case is 0.5200, which belongs to the two-star level of the set of evaluation criteria. Then, we use the formula S = Y × G^T to calculate the comprehensive evaluation value of the green smart building project and obtain the quantified comprehensive evaluation result, where the quantified evaluation standard set G takes the median value of the corresponding band in the evaluation standard set V; the quantified comprehensive score S then follows from (29). Analysis of Evaluation Results. The above calculations show that the project developed by XM Zhengpeng Real Estate Co., Ltd., is a two-star building. According to the quantified comprehensive evaluation result, the comprehensive score of the overall evaluation of the project corresponds to the two-star level. Scoring according to the judging rules in Table 1 gives consistent results. However, the judging-rules scoring method requires determining the weight or value of each rule, which increases the scoring workload of the experts. The improved AHP-FCE method can reduce the corresponding workload and improve work efficiency. Conclusions and Recommendations The analysis of the evaluation results shows that smart features and green building sustainability have become the core of modern green buildings. The main indicators that influence the development of green smart buildings include the safety and durability, health and comfort, convenience of life, save resources, livable environment, smart, and innovation and characteristics indicators. On the basis of these seven indicators, a fuzzy comprehensive evaluation model for green smart building projects was established, and the evaluation system was verified through a corresponding case, which further enriches the green smart building evaluation system. In order to promote the implementation of China's green smart building strategy and improve the level of green economic development, the following points should be given priority: (1) Firstly, we should focus on saving resources. Green smart buildings are the inevitable trend of future development. Scientific management and advanced green and clean environmental protection technologies should be used in their development so as to improve energy efficiency, reduce building energy consumption, and improve people's quality of life. Therefore, local governments should vigorously support the development of green buildings, further increase research on the development of green building products, and promulgate relevant support and subsidy policies in order to accelerate the upgrading of the green and smart building industry. (2) Secondly, in terms of smart features, it is necessary to make full use of the Internet of Things, 5G, big data, cloud computing, artificial intelligence, and other technologies to create an economical, safe, reliable, efficient, convenient, and green ecological living environment through automatic sensing, ubiquitous connection, timely transmission, and information integration.
While strengthening the utilization of green and smart building resources, qualified enterprises should be encouraged to explore and innovate more advanced management systems and smart management. (3) In terms of safety and durability, attention should be paid to the safety and durability of buildings to avoid "fragile buildings." Starting from the full life cycle of the building, the seismic performance of the building and the durability of structural components should be improved to ensure the safety of people's lives. Green smart buildings are developing rapidly. We should constantly learn from experience and adjust direction in the course of their development so as to explore an optimal development path. In the context of carbon peaking and carbon neutrality, leading companies in green smart buildings should adhere to green, environmentally friendly, and healthy production concepts and strive to explore zero-carbon buildings to provide a "green model" for the development of the industry. Data Availability The data used to support the findings of this study are included within the article. Conflicts of Interest The authors declare that they have no conflicts of interest regarding the publication of this work.
2021-11-28T16:19:02.407Z
2021-11-26T00:00:00.000
{ "year": 2021, "sha1": "fca9b29e707932a8b4e644d7cdf6d7d64af65db7", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/cin/2021/5485671.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b4b7890b059aa1486607c292c49257228499538d", "s2fieldsofstudy": [ "Environmental Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
139439088
pes2o/s2orc
v3-fos-license
Fiber Supercapacitors Based on Carbon Nanotube-PANI Composites Flexible and wearable electronic devices are of high academic and industrial interest. In order to power these devices, there is a need for compatible energy storage units that can exhibit similar mechanical flexibility. Fiber-based devices have thus become increasingly popular, since their light-weight, flexible structure can be easily integrated into textiles. Supercapacitors have garnered a lot of attention due to their excellent cycling durability, fast charge times and superior power density. The primary challenge, however, with electric double layer capacitors (EDLCs), which are part of the supercapacitor family, is that their energy densities are significantly lower than those of batteries. Pseudocapacitors, on the other hand, can be designed with large energy densities alongside the other outstanding properties typical of supercapacitors. This chapter discusses the fabrication and testing of supercapacitors based on carbon nanotube-polyaniline (PANI) composite fibers. These flexible and light-weight devices are assembled using different electrolytes for comparison. The PANI-CNT composite devices created in this work attain an energy density of 6.16 Wh/kg at a power density of 630 W/kg and retain 88% of their capacitance over 1000 charge-discharge cycles. Introduction Over the last few decades there has been an increase in research into energy storage devices, arising from the ever increasing energy demands of applications such as portable electronics and electric transportation [1]. Among portable electronics, wearable electronic devices have created a niche for themselves. Fiber-based devices have become increasingly popular since their thread-type structures can be easily integrated into fabrics and other structures. Supercapacitors are electrochemical energy storage devices that combine the high-energy-storage capability of conventional batteries with the high-power-delivery capability of traditional capacitors [2,3]. Though they store less energy than batteries, they deliver the stored energy in seconds. Supercapacitors operate for extended periods of time, often millions of cycles, without losing their energy storage capacity, giving them an edge over batteries in service life [4][5][6]. Supercapacitors have two main classifications based on their charge storage mechanism and the type of electrode materials [4]. The first, the electric double layer capacitor (EDLC), stores charge electrostatically at the electrode-electrolyte interfaces of high surface area carbon materials. This process involves physical adsorption of ions at the electrode-electrolyte interface [2]. The second, the pseudocapacitor, stores charge within the electrodes through fast surface and near-surface redox reactions [5,7]. The electrodes are derived from transition metal oxides and conducting polymers. Due to these redox reactions, pseudocapacitors have been reported with energy densities far higher than those of EDLCs. With the emergence of flexible electronics such as foldable displays [8], soft photo-detectors [9] and bendable field effect transistors [10], flexible supercapacitors have become more popular than ever.
They have been found to be suitable for powering portable and flexible electronic devices, and several have been fabricated that are lightweight and flexible and possess high power and energy densities [11][12][13][14]. Planar format supercapacitors have been found to have larger volumes and structural limitations which impede their use in lighter, smaller and omnidirectional flexible electronic devices [15,16]. To solve these problems, lightweight and high energy density fiber-shaped supercapacitors have been explored and fabricated [17][18][19][20][21][22]. Fiber electrodes for supercapacitors have been made from active materials with nanostructures, such as CNTs [23][24][25], graphene [17,26,27] and metal oxides [28][29][30]. However, the most widely studied have been CNT fiber electrodes and their composites. This is attributed to CNTs' inherent flexibility, high surface area and high electrical conductivity [31]. In their fiber formats, they are highly aligned and have excellent mechanical durability while maintaining all of the aforementioned properties. Polyaniline (PANI) is probably the most widely studied of the conductive polymers because of its high electronic conductivity, redox and ion exchange properties, excellent environmental stability and ease of preparation [32][33][34]. It has, therefore, been extensively explored in energy storage devices fabricated with pseudocapacitive electrodes. Bulk PANI, however, is not ideal for energy storage device electrodes due to its low accessible surface area. The workaround has been to fabricate nanostructured PANI materials. These structures have typically been made using a carbon template, thereby producing materials with a large area-to-volume ratio and shorter ion diffusion paths [35][36][37]. In this chapter, we report our high energy density fiber supercapacitors based on CNT-PANI fiber composites. A chemical oxidation polymerization technique is employed to deposit PANI on the surface of the CNT fibers. This composite material gives superior performance as a supercapacitor electrode due to the fast redox reactions between the PANI and the electrolytes used. To create our CNT fibers, we employ a technique that involves dry spinning of multi-walled carbon nanotube (MWCNT) fibers from vertically aligned MWCNT arrays grown by chemical vapor deposition (CVD), as described in a previous publication by our research group [38]. This technique is used to spin continuous fiber at industrial rates from MWCNT arrays of 3 cm width and 4.25 cm length, resulting in fibers with diameters of approximately 55 μm and up to 40 m in length. Next, these fibers underwent atmospheric pressure oxygen plasma functionalization to create oxygen plasma functionalized CNT (OPFCNT) fibers as the base structure for the PANI deposition. CNT fibers are dry spun from vertically aligned CNT arrays. In our work, thin films of Fe and Co were sputtered on a silicon wafer, overlaid with approximately 5 nm of Al2O3 as a buffer layer, by means of physical vapor deposition (PVD). The created structure serves as a catalyst for the growth of aligned CNT arrays on the silicon wafer. This surface-treated substrate was then diced into pieces of the required size and exposed to a CVD environment in a FirstNano ET3000 reactor. The resulting CNT array was drawable and spinnable; by twisting and pulling with a homemade setup, highly aligned fibers were fabricated [38], as shown in Figure 1.
These pristine CNT fibers, when used to form EDLCs, produce quite low energy densities, necessitating the deposition of PANI on them to increase the energy density. Oxygen plasma functionalization of fibers After the fibers were spun, they underwent an atmospheric pressure oxygen plasma functionalization process to improve their wettability. This is necessary since carbon-based materials are naturally hydrophobic and need improved wettability to increase the deposition of PANI on the surface of the fiber during the oxidative polymerization process [39][40][41][42]. In previous publications [43,44], carbon-based materials were treated with acids to functionalize them and thereby improve wettability before polymerization. Such treatments involve wet chemistry and as such mostly require multi-step reactions and strong chemicals, which affect the bulk properties of the CNT structures. The plasma functionalization process employed in this work is continuous, effective and can be used industrially for extensive lengths of fiber. Oxygen plasma functionalization was performed by systematically pulling the CNT fiber through a plasma head with a chamber for tubular structures (Surfx Atomflo 400 system). The set-up is shown below in Figure 2. The pristine CNT fiber was threaded through the plasma head and affixed to the collector bobbin with double-sided tape. The fiber was pulled through the plasma head at a speed of 0.206 cm/s using the collector bobbin on the motor. This processing functionalized the fiber with the following plasma parameters: 60 W power, 0.1 L/min oxygen and 15 L/min helium. These parameters were chosen because they caused minimal damage to the fibers, as checked by Raman spectroscopy. Polymerization of aniline on the fibers At a pH of less than 2.5, the oxidative polymerization of aniline is a chain reaction [48]. The growth of the chains proceeds by the addition of monomeric aniline molecules to the active chain ends. Chain growth terminates after at least one of the reactants in the polymerization runs out. If there is an excess of the APS (oxidant), the resulting polymer remains in the pernigraniline form [49], especially at molar ratios of APS to aniline of over 1.5. If the ratio of APS to aniline is equal to 1.25 [50], or aniline is in excess, pernigraniline is reduced to emeraldine at the end of the reaction while aniline is simultaneously oxidized to emeraldine [48,51]. We therefore ensured in all our tests that we had excess aniline to promote the growth of emeraldine, the most thermally and environmentally stable form of PANI [52][53][54]. The oxygen plasma functionalized CNT (OPFCNT) fibers were cut into 7.5 cm portions and affixed to copper tapes with fast drying silver paint (TedPella Inc.). The copper tapes served as the leads used to connect the devices for electrochemical testing. These electrodes were then placed into 10 ml beakers and put into an ice bath. Aniline monomer dissolved in 1 mol/L HCl and ammonium persulphate (APS) solution, also dissolved in 1 mol/L HCl, were then put into the various beakers with the fibers at different ratios of aniline to APS. The amount of PANI formed on the fibers was controlled by the ratio of aniline to APS used as well as the time the solution was allowed to polymerize. The fibers were taken out after the polymerization time and rinsed in a beaker of deionized water to wash off the excess PANI.
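Since the aniline:APS molar ratio is the main control knob here, a small helper for converting weighed masses into a molar ratio may be useful. The molar masses are standard values; the example masses are illustrative, not the actual quantities used in this work.

```python
M_ANILINE = 93.13   # g/mol, aniline C6H7N
M_APS = 228.20      # g/mol, ammonium persulphate (NH4)2S2O8

def aniline_aps_molar_ratio(m_aniline_g, m_aps_g):
    """Molar ratio of aniline to APS for given weighed masses (grams)."""
    return (m_aniline_g / M_ANILINE) / (m_aps_g / M_APS)

# Illustrative masses giving roughly the 2:1 ratio found optimal below:
print(round(aniline_aps_molar_ratio(0.93, 1.14), 2))  # -> 2.0
```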
Electrode and device fabrication

Fiber supercapacitors were created using poly(vinyl alcohol)-sulfuric acid (PVA-H2SO4) and polyvinylidene fluoride-co-hexafluoropropylene with 1-ethyl-3-methylimidazolium tetrafluoroborate (PVDF-EMIMBF4) gel electrolytes. The PVA-H2SO4 electrolyte was made with 10 ml DI water, 2 ml H2SO4 and 1 g PVA. The PVDF-EMIMBF4 gel electrolyte was prepared with 15 ml acetone, 1.5 g PVDF and 3 ml EMIMBF4. The PVA-H2SO4 devices were operated in a 1 V window, while the PVDF-EMIMBF4 devices were operated in a 3.2 V window. The larger voltage window allowed by PVDF-EMIMBF4 enabled us to reach larger energy densities. Devices were made from these fibers by coating them with the gel electrolyte (PVA-H2SO4 or PVDF-EMIMBF4). The fibers were then placed parallel to each other on a weighing sheet, covered with more electrolyte and sealed with Kapton tape.

Electrode and device characterization

Electrochemical measurements were carried out with an electrochemical workstation (Gamry, Interface 1000) at room temperature. The electrochemical characteristics of the electrodes and devices were evaluated by cyclic voltammetry at various scan rates, galvanostatic charge-discharge tests, and electrochemical impedance spectroscopy measurements from 10^6 to 10^-1 Hz, using a sinusoidal voltage amplitude of 10 mV at the open circuit potential. In the three-electrode configuration tests, Ag/AgCl was used as the reference electrode, platinum served as the counter electrode, and the experiments were run in 1 M Na2SO4. The capacitance (C) of the electrodes and fiber supercapacitors was calculated from the galvanostatic discharge curves at different current densities using C = IΔt/ΔV. The gravimetric capacitance (C_m) and areal capacitance (C_A) were calculated as C_m = C/m and C_A = C/A, respectively. The gravimetric energy density (E_m) and power density (P_m) were calculated by the expressions E_m = C_m(ΔV)^2/(2 × 3.6) and P_m = 3600E_m/t. The areal energy density (E_A) and power density (P_A) were calculated by the expressions E_A = C_A(ΔV)^2/(2 × 3.6) and P_A = 3600E_A/t, where I is the discharge current, t is the discharge time, ΔV is the operating voltage window, and m and A refer to the mass and area of the device, respectively [40, 55]. Scanning electron microscopy (SEM) (FEI XL30, 5 kV) and Raman spectroscopy (Renishaw inVia, 514 nm Ar-ion laser with a laser spot of ~1 μm^2) were used to characterize the CNT-PANI. The masses of the fibers were measured on a Sartorius SE2 ultra-microbalance. X-ray photoelectron spectroscopy (XPS) data were obtained using a VG Thermo-Scientific MultiLab 3000 ultra-high vacuum surface analysis system with ~10^-9 Torr base pressure, using an Al Kα source of 1486.6 eV excitation energy. High-resolution carbon scans and low-resolution survey scans were taken for each sample at no fewer than two different locations.
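The capacitance, energy and power expressions above translate directly into a small analysis routine. The sketch below, with illustrative input numbers, assumes an ideal galvanostatic discharge; it is an aid for reproducing the unit conversions, not the authors' analysis code.

    # Device metrics from a galvanostatic discharge (illustrative values).
    def cell_metrics(I, dt, dV, m_g=None, A_cm2=None):
        """I: discharge current (A); dt: discharge time (s);
        dV: voltage window (V); m_g: mass (g); A_cm2: area (cm^2)."""
        C = I * dt / dV                      # capacitance, F
        out = {"C_F": C}
        if m_g is not None:
            Cm = C / m_g                     # gravimetric capacitance, F/g
            Em = 0.5 * Cm * dV**2 / 3.6      # energy density, Wh/kg
            Pm = 3600.0 * Em / dt            # power density, W/kg
            out.update(Cm_F_per_g=Cm, Em_Wh_per_kg=Em, Pm_W_per_kg=Pm)
        if A_cm2 is not None:
            CA = C / A_cm2                   # areal capacitance, F/cm^2
            EA = 0.5 * CA * dV**2 / 3.6      # areal energy density
            PA = 3600.0 * EA / dt            # areal power density
            out.update(CA_F_per_cm2=CA, EA=EA, PA=PA)
        return out

    print(cell_metrics(I=1e-4, dt=60.0, dV=1.0, m_g=2e-3))

The factor 3.6 converts J/g into Wh/kg (1 Wh = 3600 J and 1 g = 10^-3 kg).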
Results and discussion

The plasma functionalization of the fiber was confirmed by the Raman data in Figure 3a and the XPS data in Figure 3b-d. From the Raman spectra in Figure 3a, we observe an increase in the ratio of intensities of the D and G peaks, from 0.776 to 1.195, signifying the destruction of carbon sp2 bonds during plasma functionalization. In Figure 3b, the atomic percent of oxygen increases from 9.1% in the pristine state to 28.17% for the oxygen plasma functionalized fiber. Figure 3c and d show the deconvoluted high-resolution C1s and O1s peaks from the XPS data, revealing the various oxygen functional groups found on the surface of the fiber, in close agreement with data reported in the literature [39, 56]. PANI-CNT composite fibers were created at four ratios of aniline to APS (1:1, 2:1, 5:1 and 10:1). The OPFCNT fibers were immersed in the reactant solutions and allowed to polymerize for an hour. From our electrochemical half-cell tests, we observed that a 2:1 aniline-to-APS ratio gave the best specific capacitance, as seen in Figure 4a. Further testing of OPFCNT fibers with varying polymerization durations (10 minutes to 6 hours) revealed that the composite fibers polymerized for an hour showed the best electrochemical performance, as seen in the inset of Figure 4a and in Figure 4b. We observed that the amount of PANI deposited increased with a higher concentration of APS as well as with the duration of polymerization; a 1:1 ratio therefore produced more PANI than a 2:1 ratio in the same time frame. PANI in the right amounts improves the capacitance of the fibers; however, when it is deposited in agglomerate morphologies, it leads to inefficient usage of PANI and reduced capacitance [35-37, 46]. In the same manner, if polymerization is allowed to proceed for longer times, these agglomerate morphologies form and subtract from the synergistic effects of the PANI-CNT composite. The structures of the PANI-CNT fibers were observed by SEM. The morphologies and amount of PANI formed were found to correlate strongly with the duration of the polymerization. At 10 minutes, a thin film of PANI forms across the surface of the fiber, and as the duration of polymerization increases, PANI nanorods begin to develop in dendritic structures on the fiber. Figure 5 shows SEM images of the fiber as it progresses from its pristine state to 6 hours of oxidative polymerization. For ease of referencing, we label the fibers by the number of minutes they were polymerized (minutes-PANI-CNT). Figure 6 compares pristine CNT, 10-PANI-CNT and 360-PANI-CNT at higher magnifications to reveal the PANI structures being formed. Figure 6a shows the pristine fiber, which has no PANI on it. In Figure 6b we see the onset of PANI formation as thin films on the fiber. The agglomerate morphologies of PANI are observed in Figure 6c. This shows the increase of PANI coverage on the surface of the fibers with increasing polymerization time. The Raman data presented in Figure 7 show the gradual increase in PANI formation on the composite fibers as the duration of polymerization increases. The spectra for pristine CNT and pure PANI are also included, so the gradual transformation from one extreme to the other can be seen: as the duration of polymerization increases, the spectra become less like CNT and more like PANI. Devices were created with PANI-CNT fibers, pristine CNT fibers and OPFCNT fibers. Asymmetrical supercapacitors were also fabricated combining a PANI-CNT fiber and an OPFCNT fiber. The PANI-CNT fiber supercapacitor reached an energy density of 3.77 Wh/kg at 0.5 A/g and a power density of about 188 W/kg when using PVA-H2SO4.
These parameters were dramatically increased to 6.16 Wh/kg and 630 W/kg when using EMIMBF4, corresponding to an almost 64% increment in energy density and a 235% increment in power density. Figure 8 presents a Ragone plot to give a more holistic view of the data as well as a comparison to fiber supercapacitor devices previously reported in the literature. For ease of comparison, this plot is presented in terms of areal quantities, as most fiber supercapacitor data are published with respect to the surface area of the electrodes [57]. The best devices in this Ragone plot (superior energy density and power density) were our asymmetric devices. This was attributed to the combined redox reactions between the PANI and the oxygen functional groups on the surface of the fibers, as well as to the synergistic effect of the pseudocapacitance (PANI-CNT) and EDLC (OPFCNT). Oxygen functional groups have been reported in other works to improve the capacitance of carbon-based materials [58-61], and this also plays a role in the enhanced electrochemical properties of the asymmetrical device. Figure 9 shows cyclic voltammetry graphs of all the devices at 200 mV/s and at 5 mV/s. It can be clearly seen from these graphs that the devices had the characteristic capacitive curves. The stability of a supercapacitor is an important parameter, since its practical applicability can be evaluated from this data. Figure 10 shows the cycling stability of the PANI-CNT (EMIMBF4) device over 1000 cycles. The device retains 88% of its capacitance even after 1000 charge-discharge cycles, showing good stability and long device lifetime.

Conclusion

In this chapter, we have discussed the increased attention being given to fiber supercapacitors and their relevance to wearable electronics. We also reviewed the role of carbon nanostructured fibers in energy storage devices and the challenges they face. We have successfully synthesized CNT fibers by CVD and dry spinning, applied a post-processing technique to these fibers (oxygen plasma functionalization) and, by means of oxidative polymerization, doped these fibers with PANI. The fibers were characterized electrochemically, by Raman spectroscopy and with SEM, and were then used as electrodes to create simple fiber devices. The obtained devices produced energy and power densities of up to 6.16 Wh/kg and 630 W/kg when using EMIMBF4 as the electrolyte, corresponding to almost a 64% increment in energy density and a 235% increment in power density over devices fabricated with PVA-H2SO4 (3.77 Wh/kg, 188 W/kg). These devices also maintained excellent capacitance retention (88%) over 1000 charge-discharge cycles. When a comparison was made with other devices with respect to areal energy density and power density, the asymmetrical device comprising an OPFCNT and a PANI-CNT electrode showed the best performance. This was attributed to the combined redox reactions of both the OPFCNT and PANI-CNT electrodes with the electrolyte.

Acknowledgements

This work was funded by NASA NNX13AF46A and the National Institute for Occupational Safety and Health through the Pilot Research Project Training Program of the University of Cincinnati Education and Research Center Grant # T42OH008432. One of the authors (P. K. A.) would like to thank the Department of Chemical and Environmental Engineering at UC for partial financial support. © 2018 The Author(s). Licensee IntechOpen.
This chapter is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Towards Gotthard-II: Development of A Silicon Microstrip Detector for the European X-ray Free-Electron Laser

Gotthard-II is a 1-D microstrip detector specifically developed for the European X-ray Free-Electron Laser. It will not only be used in energy-dispersive experiments but also as a beam diagnostic tool with additional logic to generate veto signals for the other 2-D detectors. Gotthard-II makes use of a silicon microstrip sensor with a pitch of either 50 µm or 25 µm and with 1280 or 2560 channels wire-bonded to adaptive gain switching readout chips. Built-in analog-to-digital converters and digital memories will be implemented in the readout chip for continuous conversion and storage of frames for all bunches in the bunch train. The performance of analogue front-end prototypes of Gotthard has been investigated in this work. The results in terms of noise, conversion gain and dynamic range, obtained by means of an infrared laser and X-rays, will be shown. In particular, the effects of the strip-to-strip coupling are studied in detail, and it is found that the reduction of the coupling effects is one of the key factors for the development of the analogue front-end of Gotthard-II.

Introduction

The European X-ray Free-Electron Laser (XFEL.EU) [1, 2] has been constructed in the Hamburg/Schenefeld region and has been available for user experiments since the second half of 2017. It delivers ultrashort, highly intense X-ray pulses with a peak brilliance ~8 orders of magnitude higher than any other synchrotron radiation source. The duration of each X-ray pulse is less than 100 fs. The pulses are delivered in bunch trains, each consisting of 2700 X-ray pulses with a separation of 220 ns. The bunch trains are repeated at 10 Hz. The unique X-ray beam and its time structure pose the following challenges to detectors used at the XFEL.EU: a dynamic range of 0, 1, ..., 10^4 × 12.4 keV photons, a frame rate of 4.5 MHz, and, last but not least, radiation hardness up to 1 GGy for 3 years of operation. There are several detector development projects currently running for the XFEL.EU. AGIPD [3-6], LPD [7, 8] and DSSC [9] are the 2-D pixel detectors for experiments at the XFEL.EU. All pixel detectors are expected to be commissioned in 2017 and 2018. In addition to the 2-D pixel detector systems, Gotthard-II, a 1-D microstrip detector, is specifically developed for the XFEL.EU, based on Gotthard-I but with improved functionality [10]. The Gotthard-II development started in 2015 and detectors will be commissioned in mid-2018 [11]. The Gotthard-II detector will be employed in the von Hamos and Johann spectrometers for energy-dispersive experiments at the Femtosecond X-ray Experiments (FXE) beamline. In addition, it will be used as a spectrum analyzer by the beam diagnostic group, as well as by the FXE, SPB (Single Particles, Clusters and Bio-molecules) and MID (Materials Imaging and Dynamics) beamlines. The potential scientific applications include, but are not limited to: X-ray emission/absorption spectroscopy, hard X-ray high resolution single-shot spectrometry (HiREX), energy-dispersive experiments, beam diagnostics, as well as veto generation for the other detectors [11]. For more examples of potential scientific applications, refer to [12-14].

Requirements of Gotthard-II at the XFEL.EU

The Gotthard-II detector has fewer readout channels but similar complexity compared to the other 2-D detectors for experiments at the XFEL.EU. In addition, it is the only detector capable of measuring all the bunches in a train.
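A short back-of-the-envelope calculation makes the time structure quoted above concrete; the script below is only a restatement of the published numbers.

    # XFEL.EU time structure quoted above.
    n_pulses = 2700          # pulses per bunch train
    spacing_s = 220e-9       # pulse separation
    train_rate_hz = 10.0     # bunch trains per second

    train_len_s = n_pulses * spacing_s          # ~594 us of pulses per train
    duty_cycle = train_len_s * train_rate_hz    # fraction of time with beam
    intra_train_rate = 1.0 / spacing_s          # ~4.5 MHz frame rate needed

    print(f"train length ~ {train_len_s*1e6:.0f} us, "
          f"duty cycle ~ {duty_cycle*100:.2f} %, "
          f"intra-train rate ~ {intra_train_rate/1e6:.2f} MHz")

This is why the detector must run at 4.5 MHz during the train while having almost the full 100 ms period between trains available for readout.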
To perform proper scientific experiments, Gotthard-II needs to achieve a frame rate of 4.5 MHz to match the particular bunch structure, a dynamic range of up to 10^4 × 12.4 keV photons, and single photon resolution. (Radiation damage in Gotthard-II is not a problem, since the ASICs can be properly shielded and the silicon sensor will see a considerably lower dose compared to the 2-D detectors, whose focal planes face the XFEL beam.) A detailed specification can be found in table 1. Gotthard-II is equipped with on-chip Analog-to-Digital Converters (ADCs) and Static Random-Access Memories (SRAM, digital memory) capable of storing 2700 images for all X-ray pulses in a bunch train: the analogue signals, after passing through a charge-sensitive pre-amplifier and a Correlated-Double Sampling (CDS) stage, are digitized by the ADCs immediately and the digital values are stored in the SRAM. All 2700 images are read out during the bunch train spacing of 99.4 ms. This approach has several advantages over the use of analogue memories to store signals from the CDS output, as implemented in e.g. AGIPD. The immediate digitization of the signals removes the problem connected with the droop of charge in analogue memories and the consequent need to cool the detector to a very low temperature in order to reduce such effects [15-17]. It moreover removes the complexity related to the analogue readout and off-chip digitization, which require great care and corresponding resources to avoid signal degradation. The analogue memories would in addition be very large in size and suffer from an on-chip cross-talk problem [18]. Another important function of Gotthard-II is the generation of veto signals for the 2-D detectors, depending on the interaction between an XFEL pulse and the investigated sample. Since the 2-D detectors have limited memories and are not able to record all images from the 2700 pulses per bunch train, with the veto signals generated by Gotthard-II useless images of the 2-D detectors can be discarded and the corresponding memories re-used. For this purpose, additional logic circuitry used to generate veto signals will be implemented in the final ASIC. This circuitry will provide one-bit hit information per channel, not stored in the SRAM but read out immediately at a rate of 4.5 MHz. This information will then be used by the FPGA on the readout board to generate the veto signal.
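To give a feeling for the readout load implied by this scheme, the following estimate assumes 12-bit ADC values plus the 2-bit gain information per sample; the bit depth of the on-chip ADC is an assumption here, since the text only fixes the 2700 stored frames and the 99.4 ms train spacing.

    # Rough per-module readout estimate (bit depths are assumptions).
    n_frames = 2700
    n_channels = 1280            # 50 um pitch variant
    bits_per_sample = 12 + 2     # assumed ADC bits + gain bits
    readout_window_s = 99.4e-3

    total_bits = n_frames * n_channels * bits_per_sample
    rate_mbps = total_bits / readout_window_s / 1e6
    print(f"{total_bits/8/1e6:.1f} MB per train -> ~{rate_mbps:.0f} Mbit/s sustained")

Roughly 6 MB per train, i.e. a sustained rate of order 0.5 Gbit/s per module, which sets the scale for the FPGA-based readout.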
Development strategies

The main building blocks of the Gotthard-II ASIC, namely the analogue front-end electronics including pre-amplifier and CDS, the ADC and the SRAM, have been designed and implemented separately in Multi-Project Wafer (MPW) runs. Therefore, each block can have its performance assessed independently and is integrated in the full-size ASIC only in case of proven full functionality. A first prototype version of a complete channel, made out of blocks not yet rated as "final grade", has already been sent for production. This will provide information about the functionality and the interactions of all the building blocks when interconnected to form a channel within the multi-channel prototype. The Gotthard-I and Jungfrau [19] readout ASICs have been used as a basis for the development of the Gotthard-II analogue front-end. This paper will focus on the performance of the existing front-end prototypes of Gotthard fabricated in UMC-110 nm technology, while the ADC and the SRAM will be discussed in a separate paper.

The architecture of the analogue front-end prototypes

The architecture of the analogue front-end prototypes (versions Gotthard-1.4 and -1.5) is shown in figure 1. It includes four main parts: 1) a dynamic gain switching pre-amplifier, 2) a CDS stage, 3) analogue and digital memory cells, and 4) a readout chain for all strip channels. The pre-amplifier is a charge-sensitive pre-amplifier with dynamic gain switching functionality, similar to AGIPD [4] and Jungfrau [19]. Its output is connected to a comparator and a dynamic gain switching logic. There are four different feedback capacitors implemented in the pre-amplifier: C_f,HG0, C_f,G0, C_f,G1 and C_f,G2. Initially, either C_f,HG0 or C_f,HG0 + C_f,G0 can be selected as the feedback capacitance. During charge integration, if the output voltage moves above the threshold of the comparator, V_th,com, the dynamic gain switching logic forces a gain switch and the capacitor C_f,G1 is added to the feedback loop of the pre-amplifier. This causes a reduction of the pre-amplifier gain and, as a side effect, a charge redistribution and a consequent reduction of the output voltage of the pre-amplifier. If the output voltage of the pre-amplifier is still above V_th,com, a second gain switch occurs by adding another feedback capacitor, C_f,G2, to the feedback circuit. C_f,G1 and C_f,G2 can be pre-charged during the pre-amplifier reset phase. In this way, the output voltage range of the pre-amplifier after gain switching is maximized, and thus a larger dynamic range can be achieved. For convenience, in the following we denote the gain using C_f,HG0 as HG0, and the gains with C_f,G0, C_f,G1 and C_f,G2 in addition as G0, G1 and G2, respectively. The CDS stage is connected to the output of the dynamic gain switching pre-amplifier. It is used to remove the low frequency noise and the pre-amplifier reset noise. The amplification factor of the CDS stage is 2.35 (also called the "CDS gain"). If gain switching happens, the correlation between the initial sample stored in the CDS circuitry and the actual signal is lost, so that CDS is no longer beneficial. For this reason, the CDS stage is bypassed after gain switching. The signal is written into the analogue memory cells through a 125 kΩ resistor which, together with the capacitive load at the CDS output, acts as an additional low-pass filter for noise reduction. The analogue signals from the CDS output are stored in analogue memory cells, while the information indicating the gain is stored, for each channel, in a 2-bit digital memory. During read-out, the analogue and digital storage cells are driven by analogue and digital buffers separately. The analogue signals are selected by a multiplexer (MUX), converted to fully differential signals through an off-chip driver and finally digitized by 14-bit ADCs on the readout board; the digital signals are sampled directly by a Field Programmable Gate Array (FPGA) on the readout board.
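The gain switching behaviour described above can be captured in a toy model: charge is integrated on the feedback capacitance, and whenever the output would exceed the comparator threshold, the next feedback capacitor is switched in. All numerical values below are illustrative assumptions, and the model deliberately ignores the pre-charge and charge-redistribution effects discussed later.

    # Toy model of dynamic gain switching (illustrative values only).
    def integrate(q_fC, v_th=1.0, c_hg0_fF=50.0, c_g1_fF=500.0, c_g2_fF=5000.0):
        """Return (number of gain switches, output voltage) for charge q_fC."""
        feedback = [c_hg0_fF]            # start in HG0
        extra = [c_g1_fF, c_g2_fF]       # C_f,G1 and C_f,G2 (pre-charged in reality)
        switches = 0
        while True:
            v_out = q_fC / sum(feedback)     # V = Q / C_feedback (fC/fF = V)
            if v_out <= v_th or not extra:
                return switches, v_out
            feedback.append(extra.pop(0))    # gain switch: add next capacitor
            switches += 1

    for q in (10.0, 100.0, 1000.0, 20000.0):   # injected charge, fC
        s, v = integrate(q)
        print(f"{q:>8.0f} fC -> switches={s}, Vout={v:.2f} V")

The essential behaviour, namely that the same channel covers small signals at high gain and large signals at reduced gain, is reproduced; the real circuit additionally pre-charges C_f,G1 and C_f,G2 to maximize the post-switch voltage range.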
The investigated Gotthard-1.4 and -1.5 prototype ASICs are wire-bonded to 320 µm thick silicon microstrip sensors with 128 strips of 50 µm pitch and 8 mm length for testing. The only difference between the Gotthard-1.4 and -1.5 ASICs is the size of the transistors used in the pre-amplifier, which is expected to influence the speed and the noise of the pre-amplifier. Since the speed of writing charge into the analogue memory cell is limited by the serial resistor in the circuit, the difference in the speed of the pre-amplifier between the two prototypes cannot be measured. Thus, only results from Gotthard-1.5 will be shown and discussed in Section 3.

The performance of the prototypes

The performance of the front-end prototypes in terms of conversion gain, noise, dynamic range and strip-to-strip coupling has been investigated experimentally. All measurements were performed at room temperature and the prototype assemblies were cooled by a fan. The sensor was biased at 240 V and the power supply voltage of the ASICs was 1.4 V.

Conversion gain

In the following discussion, the conversion gain refers to the gain for HG0 and G0 and is expressed in ADU/keV. The conversion gain is determined using the X-ray fluorescence of copper (Cu), which was placed as the target of an X-ray tube. The characteristic energy of the main k_α line of the fluorescence, E_kα, is 8.05 keV. In the measurement, an integration time of 10 µs was used and 100k frames were collected. Figure 2(a) shows the histogram of the measured ADU values using HG0 for a specific channel as an example. The identified peaks in the figure refer to 0, 1, 2 and 3 photons. Good separation between the different photon peaks can be seen, indicating good noise performance. The peak positions were extracted from a Gaussian fit to each individual peak. Figure 2(b) shows the extracted peak positions in terms of ADU as a function of energy, given by the energy of the k_α X-ray fluorescence times the number of photons. The slope of a linear fit gives the conversion gain. It should be noted that the intensity of the k_β line of the copper foil at 8.90 keV is much lower than that of the k_α line and cannot be resolved in the distribution due to the influence of noise as well as charge diffusion; it has therefore been neglected in our case. Figure 2(c) shows the conversion gains of all channels for HG0 and G0, and figure 2(d) the histogram of the gain distributions. The conversion gains for HG0 and G0 are centered at 35.2 ± 0.8 ADU/keV and 22.8 ± 0.4 ADU/keV with ~2% channel-to-channel variations, which shows very good uniformity over all channels of the ASIC. From the ratio of the conversion gains between HG0 and G0, the parasitic capacitance in the feedback loop of the pre-amplifier can be estimated to be 40.5 fF.

Noise

The noise measurement was performed in a light-tight box by measuring the integrated leakage current of the sensor for 10 µs multiple times. The histogram of ADU values was then fitted by a Gaussian function and the standard deviation, σ, was extracted. The noise is obtained using:

noise [e-] = (σ [ADU] / gain [ADU/keV]) × 1000 / 3.6,    (3.1)

where 3.6 eV is the mean energy needed to generate one electron-hole pair in silicon by ionizing radiation, and gain [ADU/keV] is the previously measured conversion gain. The noise values for HG0 and G0 are 158 ± 5 e- and 208 ± 4 e- for an integration time of 10 µs, as shown in figure 3. In addition, the noise has been investigated for different integration times, from 20 µs down to 50 ns. Figure 4 shows the extracted noise as a function of integration time for all strip channels: the conversion gain obtained from the X-ray fluorescence measurement with an integration time of 10 µs was applied to the extracted σ at the different integration times using formula 3.1. A reduction of σ below 500-600 ns has been observed, which is due to the RC time constant in the circuit: the 125 kΩ resistor that in Jungfrau helps to remove the high frequency noise limits the writing speed. When writing charge from the CDS output into the analogue memories, at least 500-600 ns are needed for the signal to settle.
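Formula 3.1 is a one-line conversion; the sketch below applies it with σ values chosen to reproduce the reported numbers (the σ inputs are back-calculated for illustration, not measured values).

    # Equivalent noise charge from sigma in ADU via formula 3.1.
    W_SI_EV = 3.6  # eV per electron-hole pair in silicon

    def noise_electrons(sigma_adu, gain_adu_per_kev):
        return sigma_adu / gain_adu_per_kev * 1000.0 / W_SI_EV

    print(f"HG0: {noise_electrons(20.0, 35.2):.0f} e-")  # ~158 e-
    print(f"G0 : {noise_electrons(17.1, 22.8):.0f} e-")  # ~208 e-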
Dynamic range

The dynamic range was measured using a pulsed infrared laser with a wavelength of 1030 nm. The integration time was set to 5 µs and the pulse duration was less than one nanosecond. The dynamic range scan was done by varying the laser intensity. The laser intensities were calibrated by measuring the photo-current from a 320 µm thick planar silicon diode and then converting it to the equivalent number of 12.4 keV photons per pulse. A detailed introduction to the experimental setup and the conversion from laser intensity to photons can be found in [18, 20]. Figure 5(a) shows the dynamic range for a specific channel using HG0 and G0. It can be seen that the dynamic range extends up to 1.26 × 10^4 12.4 keV photons at the end of the scan. The ratios of the gains, obtained from the ratios of the slopes of the linear fits to the measurement points in the different gains, are summarized in table 2. The increase of the ratio at higher intensities in all gains in figure 5(b) comes from the shot-to-shot fluctuation of the laser intensity, which results in a convolution of the laser fluctuations and the electronic noise of the ASIC. The fluctuations are well below the Poisson limit over the entire dynamic range. Considering 5σ as a good separation to resolve single photons, in HG0 and G0 single photon resolution can be achieved for X-ray photons with an energy above 3.7 keV; in G1 and G2, it is possible to resolve 3 and 55 photons of 12.4 keV, respectively.

Strip-to-strip coupling

Due to the capacitive coupling between strip channels, even if all charge carriers generated by X-ray photons are collected by one strip (no charge sharing effect), the neighbouring readout channels of that strip still measure a signal (also known as capacitive "charge division"). There are various models which describe this effect caused by the capacitive coupling between strip channels [21-27]. The charge measured by the strip channel collecting all carriers produced by the X-ray photons, Q_i, and the charges measured by its first, second and third neighbouring channels, Q_(i+1), Q_(i+2) and Q_(i+3), can be simplified as

Q_i = Q_tot · (A + 1)·(C_f + C_para)/C_inp,    (3.3)
Q_(i+1) = Q_tot · C_c^1st/C_inp,    (3.4)
Q_(i+2) = Q_tot · C_c^2nd/C_inp and Q_(i+3) = Q_tot · C_c^3rd/C_inp,    (3.5)

with the assumption that C_c^(1st,2nd,3rd) << (A + 1)·(C_f + C_para). A is the DC gain of the pre-amplifier, also known as the open loop gain. C_c^1st, C_c^2nd and C_c^3rd are the coupling capacitances between the strip channel collecting all carriers and its first, second and third neighbours. The coupling capacitance includes the contributions from the interstrip capacitance of the silicon sensor and the coupling capacitances between bonding wires as well as between bonding pads. C_f is the feedback capacitance of the pre-amplifier of strip-i, C_para the parasitic capacitance adding to the same feedback loop, Q_tot the total charge, and C_inp the total capacitance at the input node of strip-i (the channel definitions are illustrated in figure 12). C_inp is obtained by

C_inp = (A + 1)·(C_f + C_para) + 2·Σ_n C_c^nth + C_strip,    (3.6)

with C_strip the bulk capacitance of an individual strip. If we only consider the capacitive coupling up to the third neighbouring channel, formula 3.6 can be written as:

C_inp = (A + 1)·(C_f + C_para) + 2·(C_c^1st + C_c^2nd + C_c^3rd) + C_strip.    (3.7)

Thus, the coupling factor k_factor, defined as the ratio of the charge collected by a neighbouring channel to that of the channel collecting the majority of the charge, is given by:

k_factor^1st = Q_(i+1)/Q_i = C_c^1st/[(A + 1)·(C_f + C_para)],    (3.8)
k_factor^2nd = Q_(i+2)/Q_i = C_c^2nd/[(A + 1)·(C_f + C_para)],    (3.9)
k_factor^3rd = Q_(i+3)/Q_i = C_c^3rd/[(A + 1)·(C_f + C_para)].    (3.10)

Since C_strip << 2·(C_c^1st + C_c^2nd + C_c^3rd), the charge lost to the strip capacitance coupled to the backside is negligible [28, 29]. In this case, once the coupling factors have been determined, the fractional charge measured by each strip channel can be calculated by

Q_i/Q_tot = 1/[1 + 2·(k_factor^1st + k_factor^2nd + k_factor^3rd)],  Q_(i+n)/Q_tot = k_factor^nth · (Q_i/Q_tot).    (3.11)
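A quick numerical check of these relations with the coupling factors reported in the next section shows how the normalization of formula 3.11 works; this is only a consistency check, not part of the measurement chain.

    # Fractional charges from coupling factors, formula 3.11.
    def fractional_charges(k1, k2, k3):
        norm = 1.0 + 2.0 * (k1 + k2 + k3)
        return {"strip-i": 1.0 / norm,
                "1st neighbour (each)": k1 / norm,
                "2nd neighbour (each)": k2 / norm,
                "3rd neighbour (each)": k3 / norm}

    for gain, ks in {"HG0": (0.062, 0.023, 0.010), "G0": (0.042, 0.013, 0.005)}.items():
        fracs = fractional_charges(*ks)
        print(gain, {name: f"{100*v:.1f}%" for name, v in fracs.items()})

With the HG0 coupling factors this yields 84.0% on the hit strip, i.e. 16.0% of the charge is seen by the neighbours, matching the numbers quoted below.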
The strip-to-strip coupling will be discussed in two cases: 1) coupling before gain switching (all channels in the same gain), and 2) coupling right after dynamic gain switching (channels not in the same gain).

Coupling before gain switching

To determine the coupling factor before gain switching, low-rate X-ray measurements (only 0 or 1 photon collected by each strip per frame) were performed. This can be done either by reducing the current of the X-ray tube or by decreasing the integration time. Since the fractional charge in the neighbouring channels is of the same order as the noise charge, the determination has to be based on large statistics with enough photon entries. Figure 6 shows the relation between the energy measured by strip-i and its first, second and third neighbouring channels on one side (denoted strip-(i+1), strip-(i+2) and strip-(i+3)) in HG0; results in G0 have also been obtained but are not shown here. The raw measurements in ADU values have been converted to energy based on the conversion gain determined from the X-ray fluorescence measurement. The region of maximal occurrence, appearing at (0,0) in the figure, refers to the 0 photon peak, and the other two regions to single photons of 8.05 keV in strip-i and strip-(i+1) (or strip-(i+2), strip-(i+3)). Taking the single photon region of strip-i as the region of interest (ROI) and projecting it onto the two axes, as seen in figure 7(a) and (b), the energy distributions for strip-i, strip-(i+1), strip-(i+2) and strip-(i+3) for the same set of X-ray photons incident on strip-i are obtained. The mean energy/charge measured by each strip channel is obtained from Gaussian fits to each individual distribution, and the coupling factors are then determined according to formulas 3.8, 3.9 and 3.10. The determined coupling factors, k_factor^1st, k_factor^2nd and k_factor^3rd, as shown in figure 8, are 6.2%, 2.3% and 1.0% in HG0, and 4.2%, 1.3% and 0.5% in G0. Using formula 3.11, the fractional charges have been calculated and are shown in table 3. Since the fraction of charge collected by the neighbouring strips is not negligible (16.0% in HG0 and 10.7% in G0), this effect has to be taken into account in the detector calibration. Thus, the conversion gain obtained in Section 3.1 should be corrected by dividing by a factor of 84.0% for HG0 and 89.3% for G0, and the noise in Section 3.2 by multiplying by a factor of 84.0% for HG0 and 89.3% for G0, respectively. For a comprehensive understanding, the coupling factor has also been calculated theoretically. In the calculation, the DC gain of the pre-amplifier and the coupling capacitance have to be known. Since the DC gain of the pre-amplifier cannot be measured directly, it has been obtained from simulations using Cadence [30]. The coupling capacitance, C_c^1st, C_c^2nd and C_c^3rd, is mainly attributed to: the interstrip capacitance of the silicon sensor, and the coupling capacitance between bonding wires as well as between bonding pads of the readout channels. The interstrip capacitance is obtained from TCAD simulations [31]. Figure 10 shows the simulated region of the strip sensor and the interstrip capacitances, C_int^1st, C_int^2nd and C_int^3rd, as a function of bias voltage. The values at the operation voltage of 240 V are 287.1 fF, 65.2 fF and 25.8 fF, respectively. The simulated results agree with analytical calculations [32, 33].
The coupling capacitances between bonding wires, C_c,wire^1st, C_c,wire^2nd and C_c,wire^3rd, are 70.4, 46.9 and 39.2 fF, based on a theoretical calculation for pairs of parallel wires: the coupling capacitance between two parallel bonding wires of radius a, length l and separation d is given by C_c,wire = πε_0·l / ln[d/(2a) + sqrt(d^2/(4a^2) - 1)] [34], where ε_0 is the permittivity of free space. In the calculation, a = 12.5 µm, l = 3.5 mm, and d = 50, 100 and 150 µm were used, assuming independent coupling between each pair of wires. The coupling capacitance between the bonding pads of strip-i and strip-(i+1) is found to be 35.4 fF (this value is derived by measuring the coupling factor k_factor^1st between strip-i and strip-(i+1) after removing the bonding wires of strip-(i+1)), while the pad capacitance between strip-i and the other channels is negligible. Taking all the contributions into account, the coupling capacitances C_c^1st, C_c^2nd and C_c^3rd are approximately 105.8 fF, 46.9 fF and 39.2 fF, respectively. Given that the DC gain of the pre-amplifier, the feedback capacitance and its parasitic, as well as the coupling capacitances have been obtained from the previous determinations, the coupling factors k_factor^1st, k_factor^2nd and k_factor^3rd are 6.0%, 1.7% and 1.0% in HG0, and 3.7%, 1.1% and 0.6% in G0, based on theoretical calculations using formulas 3.8-3.10. Table 1 shows the comparison of the coupling factors obtained from measurements and theoretical calculations; the differences are within ~30%. The difference can be attributed to: 1) the simple assumption C_c^(1st,2nd,3rd) << (A + 1)·(C_f + C_para), which neglects the charge division between the other channels with no incoming X-ray photons, 2) the over-estimation of the DC gain of the pre-amplifier, which depends on the input and output voltages of the pre-amplifier and might be different in the measurement, 3) a mismatch of the feedback capacitance of the pre-amplifier in the ASIC fabrication, and 4) the rough estimation of the coupling capacitance between bonding wires, under the assumptions that the coupling between different pairs of wires is independent and that the wires are parallel and equidistant from one another.
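Evaluating the parallel-wire formula with the stated geometry reproduces the quoted wire-to-wire capacitances to within a few fF, which is a useful sanity check of the inputs to the theoretical coupling factors.

    # Parallel-wire coupling capacitance, C = pi*eps0*l / ln(x + sqrt(x^2 - 1)),
    # with x = d/(2a); geometry as given in the text.
    import math

    EPS0 = 8.854e-12  # F/m

    def wire_cap_fF(d_um, a_um=12.5, l_mm=3.5):
        x = d_um / (2.0 * a_um)
        c = math.pi * EPS0 * (l_mm * 1e-3) / math.log(x + math.sqrt(x * x - 1.0))
        return c * 1e15  # F -> fF

    for d in (50, 100, 150):  # spacing to 1st, 2nd, 3rd neighbour wire, um
        print(f"d = {d:>3} um: {wire_cap_fF(d):.1f} fF")
    # -> ~73.9, 47.2, 39.3 fF, close to the 70.4/46.9/39.2 fF used above.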
Coupling right after dynamic gain switching

When dynamic gain switching happens in one strip channel, the charge stored on C_f,G1 during the pre-charge phase (and on C_f,G2 if the second gain switching occurs) is re-distributed to all capacitors in the feedback loop of the pre-amplifier and, due to the capacitive coupling, to the neighbouring channels as well. In this case, the charge division into the neighbouring channels is also reduced, because the equivalent capacitance of the pre-amplifier of the switched strip channel increases from (A + 1)·(C_f,HG0 + C_para) to (A + 1)·(C_f,HG0 + C_f,G1 + C_para), according to formulas 3.3, 3.4 and 3.5. This causes: (a) an abrupt change of the charge in the neighbouring channels without gain switching; (b) a delay of the gain switching of the neighbouring channels. The first phenomenon (a) is observed experimentally using the aforementioned infrared laser injected into the center of a strip (strip-i) using HG0.

Figure 11. Strip-to-strip coupling at the gain switching point using HG0. The infrared laser was injected into the center of strip-i. The measured ADU values of strip-i, strip-(i+1), strip-(i+2) and strip-(i+3) are shown as a function of the number of 12.4 keV photons. Strip-i switches at ~25 photons of 12.4 keV and its output decreases after gain switching because the CDS stage is bypassed.

Figure 11 shows the output of strip-i and its first, second and third neighbouring strip channels. The x-axis refers to the number of photons measured by strip-i. Due to the diffusion of carriers in the silicon sensor and the size of the laser beam, a fraction of the charge generated by the laser diffuses into strip-(i+1), and thus its output is higher than the 6.2% of the output of strip-i expected from pure capacitive coupling. When the gain switching of strip-i occurs (at ~25 photons, as seen in figure 11), a reduction in ADU is clearly visible in the neighbouring strip channels. The step corresponds to 3.3 × 12.4 keV photons for strip-(i+1), 1.1 photons for strip-(i+2) and 0.5 photons for strip-(i+3). Considering up to the third neighbouring strip channels, the total change is ~9.8 × 12.4 keV photons. The measurement results are explained by a SPICE simulation of the network shown in figure 12: it considers 7 strips, each connected to a pre-amplifier and a CDS stage. The strip channels are coupled through the interstrip capacitance as well as the coupling capacitances due to the bonding wires and pads. In addition, all strips are coupled to the sensor backplane, where the bias voltage is applied. In the simulation, current was injected at the input of the pre-amplifier of strip-i, and the outputs of strip-i, strip-(i+1), strip-(i+2) and strip-(i+3) were simulated as a function of the current value. The injected current pulse was a triangle with a total duration of 20 ns. This is longer than the pulse generated by photons in reality (usually a few ns if no "plasma effect" occurs [35-37]); however, this does not influence the results. The peak values of the injected current were ramped from 55 nA to 550 µA, corresponding to 1 to 10^4 × 12.4 keV photons.

Figure 12. The SPICE model used for the simulation, including the RC network of the different coupling sources and the pre-amplifiers and CDS stages of the seven strip channels. The current injected at the input node of the pre-amplifier of strip-i had a triangular shape with 10 ns rise and fall times and was ramped from 55 nA to 550 µA, corresponding to 1 to 10^4 × 12.4 keV photons, respectively. Note that the CDS and gain switching electronics of each channel are not indicated in the figure.
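The mapping between photon number and the injected test current can be verified with a few lines: a 12.4 keV photon creates 12400/3.6 electron-hole pairs, and for a triangular pulse of total width 20 ns the deposited charge equals the peak current times half the width.

    # Photon number -> peak current of the triangular SPICE test pulse.
    E_PHOTON_EV, W_SI_EV, Q_E = 12.4e3, 3.6, 1.602e-19

    def peak_current_A(n_photons, width_s=20e-9):
        q_C = n_photons * (E_PHOTON_EV / W_SI_EV) * Q_E  # deposited charge, C
        return q_C / (0.5 * width_s)                     # triangle: Q = peak*width/2

    print(f"1 photon    -> {peak_current_A(1) * 1e9:.0f} nA")   # ~55 nA
    print(f"1e4 photons -> {peak_current_A(1e4) * 1e6:.0f} uA") # ~550 uA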
Figure 13 shows the simulation results for the output of the CDS stage as a function of the number of 12.4 keV photons. For strip-i, the first gain switching occurs at ~25 photons; the switching point from the simulation is consistent with the measurements. Before the gain switching of strip-i, the change of the CDS output for strip-(i+1), strip-(i+2) and strip-(i+3) increases proportionally to the output of strip-i, with the ratios given by the coupling factors. After the gain switching of strip-i, the output node of strip-i, which is equal to the output voltage of the pre-amplifier since the CDS is bypassed, is brought to a voltage close to the pre-charge voltage. It should be noted that the output voltage of strip-i after the gain switching point is lower than the pre-charge voltage. This is mainly due to the fact that the charge pre-stored on C_f,G1 (and C_f,G2) re-distributes to C_f,HG0 and, through the capacitive coupling, to the neighbouring channels, and thus reduces the output of the switched channel.

Figure 13. Simulation of the dynamic range scan showing the cross-talk after gain switching due to capacitive coupling. The CDS outputs of strip-i, strip-(i+1), strip-(i+2) and strip-(i+3) are shown.

The release of the pre-stored charge into the circuit is equivalent to writing a negative charge into the input node of the pre-amplifier of strip-i; there is thus a negative charge division with the neighbouring strip channels which, together with the increase of the capacitive load in the feedback loop of the pre-amplifier of strip-i, results in a reduction of the CDS output for strip-(i+1), strip-(i+2) and strip-(i+3). The SPICE simulation qualitatively explains the measured observation. The second phenomenon (b) is observed through a measurement with laser injection into the middle of the gap between two strips. Figure 14 shows the results when injecting the laser into the middle of the gap between strip-(i-1) and strip-i. The results for strip-(i+1), strip-(i+2) and strip-(i+3) are also indicated in the figure. Due to the threshold dispersion of the channels, the gain switching point differs channel by channel. In this measurement, strip-i and strip-(i-1) receive the same amount of charge but switch at different numbers of photons: ~16 photons for strip-i and ~23 photons for strip-(i-1). Immediately after the gain switching of strip-i, the cross-talk reduces the signal in strip-(i-1) by ~3 × 12.4 keV photons, thus causing a further delay in the gain switching of strip-(i-1): after strip-i has switched, strip-(i-1) requires 3 more photons to switch gain, as shown by the open triangles in figure 14. The reason is that the charge division between strip-i and strip-(i-1) is no longer identical, due to the increased capacitive load of strip-i: after its gain switching, the equivalent capacitance of its pre-amplifier increases from (A + 1)·(C_f,HG0 + C_para) to (A + 1)·(C_f,HG0 + C_f,G1 + C_para). This means that, even with the same charge at the inputs of the pre-amplifiers of strip-i and strip-(i-1), less charge flows into strip-(i-1) because of the unequal charge division caused by the different capacitive loads. Further evidence supporting this explanation is that, after the gain switching of strip-i, the slope of strip-(i-1) decreases, indicating less charge collected than expected even though the charge injected into the two strips is identical. A careful calibration of each channel is therefore necessary, which requires knowledge of the gain status of the neighbouring channels. In addition, the cross-talk has been investigated using different pre-charge voltages, V_ref,prechr. Figure 15(a) shows the measured ADU values of strip-i, strip-(i+1), strip-(i+2) and strip-(i+3) as a function of V_ref,prechr, at the point where the gain of strip-i has just switched from HG0 to G1. The outputs of all channels depend linearly on V_ref,prechr. The intersection point between strip-(i+1) and strip-(i+2) at ~580 mV is found to be the voltage at which the reset switch of the pre-amplifier is just released. Above this voltage, the measured ADU values of strip-(i+1), strip-(i+2) and strip-(i+3) are below their nominal pedestal values. Figure 15(b) shows the measured cross-talk in terms of the number of 12.4 keV photons at different pre-charge voltages. With increasing pre-charge voltage, a larger negative cross-talk is observed, since more negative charge is pre-stored on C_f,G1 and C_f,G2.
The measurements indicate two ways to reduce the cross-talk: 1) reducing the voltage used to pre-charge C_f,G1 and C_f,G2; 2) moving the working point of the pre-amplifier to a higher voltage. Both reduce the negative charge pre-stored on the medium and low gain capacitors, but have unacceptable side effects: the former reduces the dynamic range; the latter increases the power consumption of the ASIC and reduces the DC gain, which in turn increases the coupling factor before gain switching as a further drawback. In summary, the coupling effect can be calibrated easily before gain switching; after gain switching, however, due to the negative charge from the pre-charged feedback capacitors and the larger feedback capacitance, the charge redistributes in the readout network and a negative cross-talk is observed, making the detector calibration complex. Thus, reducing the coupling effect is a key task for the development of the Gotthard-II analogue front-end.

Summary and discussion

Gotthard-II is currently under development for the XFEL.EU. It makes use of a silicon strip sensor as the sensing material and a dynamic gain switching ASIC to cope with the high dynamic range of up to 10^4 × 12.4 keV photons while keeping single photon resolution. To avoid droop effects and to achieve compact storage of images, ADCs will be implemented in the ASIC and the digitized values will be stored in SRAM for each of the 2700 X-ray pulses and then read out during the bunch train spacing of 99.4 ms. Additional logic for digital comparisons will be designed to provide veto signals for the other pixel detectors. This paper puts an emphasis on the characterization of the existing analogue front-end prototypes of Gotthard: the noise, conversion gain and dynamic range have been measured, and most of the results meet the specifications. The writing speed is limited by a serial resistor in the circuit, which can simply be removed in the next design, with an expected increase of noise as a drawback. In addition, the coupling effects have been investigated in detail. It has been found that the charge division due to capacitive coupling is not negligible in the current design; however, a careful calibration of the coupling factors of each strip channel to its neighbours makes the correction of the conversion gain feasible. The coupling effects have also been investigated around the dynamic gain switching point: cross-talk has been observed due to the redistribution of the charge on the medium and low gain capacitors. The total cross-talk can be as high as ~9.8 × 12.4 keV photons at the switching point and depends on the voltage used to pre-charge the medium and low gain capacitors. This makes the calibration of each strip complex. Thus, reducing the coupling effect is key for further development. Based on this study, it is known that the cross-talk effect can be suppressed by reducing the pre-charge voltage; however, this reduces the dynamic range of the detector, which is not acceptable. The theory described in the text and the measurement results indicate a few ways to reduce the coupling effect: 1) reduction of the interstrip capacitance and the other coupling capacitances, 2) an increase of the capacitance in the feedback loop of the pre-amplifier, and 3) an increase of the DC gain of the pre-amplifier. For 1), the w/p ratio (width of the strip implant divided by the pitch) for the current design is 11 µm/50 µm = 0.22.
The implant width of 11 µm is close to the design limit; in addition, a further reduction of the implant width would not gain a factor of two in the reduction of the coupling factor. The coupling capacitance due to the bonding wires cannot be reduced much either, since the length of the wires is limited by the guard ring region of the silicon sensor and by the reserved safety space between the ASIC and the sensor edge at high voltage. For 2), an increase of the capacitance in the feedback loop of the pre-amplifier would result in an increase of noise; in particular, since the coupling factor is inversely proportional to the feedback capacitance, a small increase in the capacitance cannot improve the coupling effect much, whereas a significant increase can result in a loss of single photon resolution due to the increased noise. For 3), by optimizing the design of the pre-amplifier it is possible to achieve a high DC gain and thus reduce the coupling effect significantly. The design and optimization of a pre-amplifier with higher DC gain will therefore be an important task in the development of the Gotthard-II analogue front-end.

A. List of parameters used in SPICE simulation

The SPICE simulation started in HG0 mode, with the parameters listed in the table. The derived coupling factors k_factor^1st, k_factor^2nd and k_factor^3rd agree quite well with the extraction from the low-rate X-ray measurement.
A Path Toward Precision Medicine for Neuroinflammatory Mechanisms in Alzheimer's Disease

Neuroinflammation commences decades before Alzheimer's disease (AD) clinical onset and represents one of the earliest pathomechanistic alterations throughout the AD continuum. Large-scale genome-wide association studies point out several genetic variants—TREM2, CD33, PILRA, CR1, MS4A, CLU, ABCA7, EPHA1, and HLA-DRB5-HLA-DRB1—potentially linked to neuroinflammation. Most of these genes are involved in proinflammatory intracellular signaling, cytokines/interleukins/cell turnover, synaptic activity, lipid metabolism, and vesicle trafficking. Proteomic studies indicate that a plethora of interconnected aberrant molecular pathways, set off and perpetuated by TNF-α, TGF-β, IL-1β, and the receptor protein TREM2, are involved in neuroinflammation. Microglia and astrocytes are key cellular drivers and regulators of neuroinflammation. Under physiological conditions, they are important for neurotransmission and synaptic homeostasis. In AD, there is a turning point throughout its pathophysiological evolution where glial cells sustain an overexpressed inflammatory response that synergizes with amyloid-β and tau accumulation, and drives synaptotoxicity and neurodegeneration in a self-reinforcing manner. Despite a strong therapeutic rationale, previous clinical trials investigating compounds with anti-inflammatory properties, including non-steroidal anti-inflammatory drugs (NSAIDs), did not achieve primary efficacy endpoints. It is conceivable that study design issues, including the lack of diagnostic accuracy and biomarkers for target population identification and proof of mechanism, may partially explain the negative outcomes. However, a recent meta-analysis indicates a potential biological effect of NSAIDs. In this regard, candidate fluid biomarkers of neuroinflammation are under analytical/clinical validation, i.e., TREM2, IL-1β, MCP-1, IL-6, TNF-α receptor complexes, TGF-β, and YKL-40. PET radio-ligands are investigated to accomplish in vivo and longitudinal regional exploration of neuroinflammation. Biomarkers tracking different molecular pathways (body fluid matrixes) along with brain neuroinflammatory endophenotypes (neuroimaging markers), can untangle temporal–spatial dynamics between neuroinflammation and other AD pathophysiological mechanisms. Robust biomarker–drug codevelopment pipelines are expected to enrich large-scale clinical trials testing new-generation compounds active, directly or indirectly, on neuroinflammatory targets and displaying putative disease-modifying effects: novel NSAIDs, AL002 (anti-TREM2 antibody), anti-Aβ protofibrils (BAN2401), and AL003 (anti-CD33 antibody). As a next step, taking advantage of breakthrough and multimodal techniques coupled with a systems biology approach is the path to pursue for developing individualized therapeutic strategies targeting neuroinflammation under the framework of precision medicine.
INTRODUCTION

Alzheimer's disease (AD) is the most common cause of neurodegenerative dementia. According to current estimates, 17% of people aged 75-84 years in the United States have AD, and the disease costs the country US$236 billion per year. The prevalence is projected to triple by 2050 to >15 million, with annual costs of >$700 billion (1). There is an urgent need to develop pharmacological treatments with a disease-modifying effect to halt the disease at its earliest preclinical stage, where brain and cognitive functions can still be preserved (2, 3). Indeed, the drugs currently available on the pharmaceutical market (i.e., acetylcholinesterase inhibitors and non-competitive N-methyl-D-aspartate antagonists) have been approved for a symptomatic effect only and for the dementia stage of AD (4). The acknowledged pathophysiological hallmarks, namely (I) extracellular deposition of amyloid beta (Aβ), (II) intracellular aggregates of tau proteins, ultimately called neurofibrillary tangles (NFT), and (III) neurodegeneration, have been integrated into research diagnostic criteria (5-8).
The hypothesis-free, biomarker-guided "A/T/N" classification scheme was introduced to categorize subjects based on core AD hallmarks (9). The A/T/N scheme is anticipated to provide consistent recruitment of individuals and target engagement among the various sites in AD clinical trials. Even though the A/T/N classification scheme provides crucial pathophysiological insights, it offers only a partial depiction of the spectrum of pathomechanistic modifications of AD (10, 11). The pathophysiological mechanisms of multifactorial and polygenic AD are not limited to the neuronal tissue; they are related to cerebral immunological responses (12). Indeed, the brains of patients with AD and other neurodegenerative diseases (ND) show chronic inflammation (13). Neuroinflammation is an innate immunological response of the nervous system that involves microglia, astrocytes, cytokines, and chemokines, which play a central role in an early phase of AD pathogenesis (12, 14). The key contribution of inflammation to AD pathophysiology was hypothesized more than 20 years ago (12, 15-17). Recent studies demonstrate that this early disease-aggravating central nervous system (CNS) inflammation starts decades before the appearance of severe cognitive decay or AD (18-20). Along this line, different longitudinal studies show that inflammation and microglial activation occur years before AD onset (21-23). Furthermore, there is a strong link between neuroinflammation and amyloid and tau accumulation in the human brain (23-26). The acknowledged cell mediators of inflammatory mechanisms in AD are microglia and astrocytes (12). In general, these cells play a substantial role in neural transmission and synapse remodeling, as they facilitate the removal of nonessential synapses by eliminating inadequate connections (27, 28). Thus, the efficiency of neuronal transmission is increased.

The Role of Microglia and Astrocytes in Alzheimer's Disease Synaptic Dysfunction

Synapses exhibit a quad-partite arrangement consisting of an axon terminal and a dendritic spine in direct communication with a microglial and an astrocytic process (29). Astrocytes and microglia (the brain-resident macrophages) play a key role in neural circuit development and in synaptic homeodynamics during adulthood. Astrocytes are essential for supporting synaptogenesis (axonal and dendritic spine sprouting) and regulating synaptic robustness (30-32). Astrocytes also contribute to the spatiotemporal integration of several synaptic signals and regulate synaptic transmission (33, 34). Microglial cells play a key role in the immune surveillance of the presynaptic microenvironment and in synaptic remodeling, pruning axonal and dendritic terminals by reshaping proteolytic and phagocytic processes. Microglial cells are able to recruit astroglia, or they can be recruited by the latter (30-32, 35). They are thought to drive the well-known age-related regional synaptic vulnerability, as recently reported (36). Indeed, an age-related ultrastructural and functional shift of microglial cells is associated with increased synaptic susceptibility and neurodegeneration (35). Therefore, astrocytes and microglia express physiological properties essential for synaptic transmission, the accurate modulation of neural and synaptic plasticity, and both synaptic adaptation and homeostasis (30-32).
The Role of Microglia
Microglial cells, arising from the mesodermal (myeloid) lineage (42), are the main category of macrophages in the CNS parenchyma. They express a large assortment of receptors that recognize exogenous or endogenous CNS insults and initiate an immune response. Besides their typical immune cell role, microglial cells protect the brain by stimulating phagocytic clearance and providing trophic sustenance to preserve cerebral homeostasis and support tissue repair. When circumstances related to loss of homeostasis or tissue alterations occur, many dynamic microglial mechanisms are triggered, leading to the "activated state" of microglia (43). These encompass modifications of cellular morphology, changes in the secretory profile of molecular mediators, and increased proliferative responses (44). A persistent homeodynamic imbalance, such as brain accumulation of Aβ, can trigger a further step in activation, referred to as "priming" (37). Priming of microglia is directed by alterations in their microenvironment and the release of molecules guiding their proliferation. Priming makes microglia more responsive to secondary inflammatory stimuli, which can then elicit amplified inflammatory reactions (37). Activated microglia are a typical pathophysiological feature of AD and other ND (12,43,45). Two main types of microglial cells are present in the brain: "resting" (or "quiescent") and "active" microglia. In particular, there is evidence for a high degree of heterogeneity of microglial activation in the CNS, which can be categorized into two opposite activation phenotypes: M1 and M2 (43,46,47). According to the activated phenotype, microglia can generate either cytotoxic or neuroprotective effects (46). The M1 or "proinflammatory" phenotype (classically activated) releases proinflammatory cytokines and nitric oxide. It decreases the release of neurotrophic factors, thus exacerbating inflammation and cytotoxicity (43). In contrast, the M2 or "anti-inflammatory" phenotype (alternatively activated) releases anti-inflammatory cytokines and shows increased expression of neurotrophic factors and several other signals involved in downregulation, protection, or repair processes in response to inflammation (43). Preliminary evidence from experimental studies suggests that the phenotypic transformation between the activated M1/M2 functional states ("phenotypic switching") (48,49) can be determined by both the stage and the severity of the disease. In preclinical models, M1 microglia seem to prevail at the injury site at the end stage of disease, once the inflammation-resolution and repair processes of M2 microglia are diminished (46). In light of the increasing evidence that microglial activation occurs along a continuum between proinflammatory (M1) and anti-inflammatory (M2) phenotypes, the M1/M2 "dichotomy" (or "polarization" scheme) is still disputed. It actually seems possible that the global process of microglia activation represents a much larger heterogeneous spectrum of very dissimilar responses (43). Experimental models of AD demonstrate that microglia cluster around plaques, likely via chemotactic mechanisms, and may contribute both to Aβ clearance (39,44) and to limiting the growth and further accumulation of plaques (39,44). Moreover, the dysregulation of microglial activity, including dystrophic microglia, may act as a trigger, a worsening factor, or both in the seeding of aberrant protein aggregates in the brain (39,44).
In AD, during inflammation, there is a transition from the resting to the active functional state of microglia that, at a general level, might be the consequence of stress or depressive-like behavior (50). At a molecular level, inflammation is promoted by the presence of Aβ aggregates, including oligomers and fibrils (51-54). Indeed, microglia can bind soluble Aβ oligomers and insoluble Aβ fibrils through cell surface receptors, including the class A1 scavenger receptor (SCARA1), cell surface cluster of differentiation (CD) markers (CD36, CD14, and CD47), the α6β1 integrin, and the Toll-like receptors (TLRs) (55-58). A key point within the scientific debate is recent evidence indicating that microglia display either beneficial or harmful effects throughout the onset and progression of AD (45). This is strictly related to the nature of their major activities: (I) clearance of Aβ or (II) release of proinflammatory mediators. In early AD pathogenesis, Aβ oligomers and fibrils gather in the extracellular space and elicit a pathological cascade resulting in neuronal apoptosis and depletion. Microglia eliminate Aβ peptides and dying/dead cells through phagocytosis (59,60). Besides clearance of Aβ oligomers and fibrils, microglia surround plaques and fibrils, likely creating a physical barrier that can prevent their spreading and toxicity (61). Aβ clearance is also stimulated by the release of numerous proteases participating in Aβ degradation (62). In spite of the advantageous actions of early microglial activation, chronic activation by Aβ is detrimental and induces protracted inflammation and disproportionate Aβ deposition, thus accelerating neurodegeneration (Figure 1). During AD pathogenesis, the production and release of proinflammatory cytokines and other detrimental components are intensified. In addition, the typical phagocytic action of microglia is decreased. Moreover, the microglial-dependent release of apoptosis-associated speck-like protein containing a caspase recruitment domain (ASC) modulates the diffusion of the pathology within and between cerebral areas (63). Extracellular vesicles, constituted by microvesicles and exosomes and released by reactive microglia, also play a role in AD pathogenesis (64) (Figure 1). Finally, microglial cells are able to regulate AD pathogenesis via active interaction with neurons, astrocytes, and oligodendrocytes. Indeed, activated microglial cells induce altered astrocytes via proinflammatory cytokines (Figure 1). These astrocytes can accelerate and aggravate neuronal and oligodendrocyte death (65). The still open question is to understand the specific contributions of neuronal and glial cells in the early phase of inflammation in preclinical AD. Aβ1−42 oligomers have a major role in synaptic depletion and gradual cognitive deterioration (66,67). They induce neuroinflammation and neurodegeneration by stimulating the microglia to release proinflammatory cytokines (Figure 2).
FIGURE 1 | Multifaceted functions of microglia during Aβ pathology. In healthy brain and early stages of AD, microglia clear small aggregates of Aβ peptides by phagocytosis and by secreting proteolytic enzymes, such as IDE, neprilysin, and MMP-9. During advanced AD, microglia exacerbate AD pathology by releasing proinflammatory cytokines that induce neuronal cell death as well as A1 astrocytes, which, in turn, affect neuronal survival. Moreover, during advanced AD, microglia-derived ASC specks and EVs promote seeding of Aβ aggregates. Aβ, amyloid beta; AD, Alzheimer's disease; ASC, apoptosis-associated speck-like protein containing a CARD; C1q, complement component 1q; EVs, extracellular vesicles; IDE, insulin degrading enzyme; IL-1β, interleukin-1 beta; MMP-9, metalloprotease-9; TNF-α, tumor necrosis factor-alpha. From Wang and Colonna (45). Copyright© 2019, Society for Leukocyte Biology. Reprinted with permission from Wiley.
FIGURE 2 | Role of neuroinflammation in AD pathogenesis: impairment of neurotrophin signaling. Aβ1−42 oligomers promote neuroinflammation and neuronal death in the AD brain by eliciting the release of proinflammatory cytokines (IL-1β and TNF-α) from microglia and by interfering with the synthesis of anti-inflammatory cytokines such as TGF-β1. TNF-α inhibits microglial phagocytosis of Aβ and stimulates γ-secretase activity, thus facilitating Aβ accumulation and microglia-mediated neuroinflammation. Proinflammatory microglial activities promote neuronal death also through the formation of ROS and RNS. Neuroinflammatory phenomena can finally contribute to the pathogenesis of AD by impairing neurotrophin signaling function: (I) reducing the synthesis of BDNF and TGF-β1 and (II) causing an impairment of the NGF metabolic pathway characterized by a reduced conversion of proNGF to biologically active mNGF and by an increased degradation of mNGF promoted by MMP-9. Aβ, amyloid beta; Aβ1−42, 42-amino-acid-long amyloid beta peptide; BDNF, brain-derived neurotrophic factor; IL-1β, interleukin-1 beta; MMP-9, metalloprotease-9; NGF, nerve growth factor; mNGF, mature nerve growth factor; proNGF, precursor of the nerve growth factor; RNS, reactive nitrogen species; ROS, reactive oxygen species; TGF-β, transforming growth factor-beta; TNF-α, tumor necrosis factor-alpha.
The Role of Astrocytes
Astrocytes, differently from microglia and similarly to neurons and oligodendrocytes, arise from the neuroectoderm (74). These cells promote synaptogenesis (axonal and dendritic spine sprouting), regulate synaptic strength, take part in the spatial-temporal integration of multiple synaptic processes, and modulate neurotransmission. Hence, astrocytes execute a variety of physiological activities, in both the developing and the adult brain, that are essential for synaptic plasticity and for robust and organized cognitive activity (74). Of note, astrocytes modulate Ca2+-dependent signaling pathways that are crucial for hippocampal synaptic function and plasticity (75,76). Indeed, depending on the fluctuations of intracellular Ca2+ concentrations, they release gliotransmitters, such as glutamate, D-serine, and ATP, which have feedback actions on neurons (77). Moreover, each astrocyte wraps several neurons, thus interacting with hundreds of neuronal dendrites (78) and connecting with up to two million synapses in the human cortex (79). This kind of interconnectedness indicates that each astrocyte creates a hub to facilitate the integration of information (74). Moreover, remodeling of astrocytes promotes neuroprotection and recovery of injured neural tissue (80,81). Along with microglia activation, hypertrophic reactive astrocytes gather around Aβ plaques, as reported in human postmortem studies (82) as well as in animal models (83). Like microglia, astrocytes are also activated by tissue injury, infection, and inflammation (84).
In AD, after exposure to Aβ, astrocytes release various proinflammatory molecules, such as cytokines, interleukins (ILs), complement components (85-87), nitric oxide, and other cytotoxic compounds, ultimately amplifying the neuroinflammatory response. Human neuropathological studies conducted on AD brains report the presence of cytoplasmic inclusions of non-fibrillar Aβ in astrocytes, thought to reflect phagocytic engulfment of extracellular Aβ deposits (86). In addition, rodent models of AD indicate the ability of astrocytes to take up and clear Aβ in subjects bearing cerebral fibrillar aggregates and diffuse plaques (16,17,33,86). Conversely, the shutdown of astrocyte-mediated homeodynamics is associated with increased Aβ plaque burden and dystrophy of synaptic terminals (68). This enhanced phagocytic activity may represent a compensatory mechanism against incipient Aβ accumulation, aimed at neutralizing its toxicity.
GENES MODULATING NEUROINFLAMMATION IN ALZHEIMER'S DISEASE
Genome-wide association studies (GWAS) allowed the detection of more than 40 susceptibility gene variants associated with a higher risk of developing late-onset AD (88). These include genes associated with the immune response (in particular, ABCA7, CD33, CLU, CR1, EPHA1, HLA-DRB5-HLA-DRB1, and MS4A). The relevance of neuroinflammation is further supported by recent large-scale GWAS showing that the risk of developing late-onset AD is substantially more elevated in individuals with rare variants of microglial immunoreceptors: TREM2, encoding the triggering receptor expressed on myeloid cells 2 protein (89); CD33 (transmembrane receptor CD33), expressed on cells of myeloid lineage (90,91); and PILRA (paired immunoglobulin-like type 2 receptor alpha) (92). The receptor protein TREM2 enhances the rate of phagocytosis in microglia and macrophages, modulates inflammatory signaling, and controls myeloid cell number, proliferation, and survival (89). Recent studies show that triggering the TREM2 receptor in microglial cells is closely associated with the pathogenesis of AD (93). TREM2 modulates microglial functions (e.g., stimulates the production of inflammatory cytokines) in response to Aβ plaques and tau tangles (94,95). The absence of TREM2 attenuates amyloid pathology during early AD; however, pathology is exacerbated at later stages due to the loss of phagocytic Aβ clearance (94). TREM2 variants may increase AD risk by decreasing the Aβ phagocytic ability of microglia and through dysregulation of the proinflammatory response of these immune cells (96). Interestingly, the analysis of existing single-cell transcriptome datasets for human neurons highlights the association of microglia with late-onset AD (97). In addition, the study of regulatory networks of genes showing differential expression in AD brains indicates that immune- and microglia-specific gene modules primarily contribute to AD pathophysiology (98). Finally, Tanzi and colleagues, after exploring the potential role of the cross-talk between CD33 and TREM2 in both neuroinflammation and the etiology of AD, propose that TREM2 works downstream of CD33 to modulate the neuroinflammatory process (99).
ROLE OF NEUROINFLAMMATION IN ADULT NEUROGENESIS AND ALZHEIMER'S DISEASE
Besides the above-mentioned role of Aβ and tau in triggering neuroinflammation, it is assumed that the presence of extracellular tau plays a role in the transition from resting to active microglia.
In the resting microglia, the protein fractalkine (CX3CL1), secreted by healthy neurons, binds to the cell receptor (CX3CR1) present in the microglia, allowing the maintenance of microglia in the resting state. Tau pathology is shown to be associated with neuroinflammatory processes. On the other hand, microglia could be involved in tau propagation in tauopathies. In this scenario, microglial CX3CR1 acts as a receptor for extracellular tau, since the absence of CX3CR1 impairs the internalization of tau by microglia (100). Thus, extracellular tau can compete with CX3CL1 for a common receptor. Microglial cells lacking CX3CR1 are deficient in neuronal CX3CL1 signaling and are not in the resting state. As a result, these active microglial cells could secrete compounds, such as cytokines, potentially affecting neuronal functions like adult neurogenesis. The absence of microglial CX3CR1 impairs the synaptic integration of adult-born hippocampal granule neurons (101). Mice lacking CX3CR1 show modifications in both microglia and neurons of some cerebral areas, like the dentate gyrus. Adult-born neurons in CX3CR1−/− mice show deficient synaptic integration into the neuronal network and exhibit a diminished number of dendritic spines. These spines also display morphological alterations, and mice lacking the CX3CR1 protein have a hyperactive, anxiolytic-like, and depressive-like phenotype (101). Notably, most of the previous observations come from mouse models, and little is known about the consequences of microglial changes in humans. Interestingly, CX3CL1 concentrations are reduced in the cerebrospinal fluid (CSF) of AD patients compared to control subjects, thus suggesting that variations in CX3CL1 levels might represent a new target for inflammation in AD (102). Two recent publications describe the consequences, in humans, of homozygous mutations in the colony-stimulating factor 1 receptor (CSF-1R) gene, which encodes a cell receptor essential for the development and maintenance of microglia. The consequences are abnormalities not only in brain structures, like the corpus callosum, but also in bones that, in some cases, are overly dense and malformed (103,104). In the future, it will be interesting to explore possible changes in adult neurogenesis at the dentate gyrus in autopsy material from patients with biallelic CSF-1R mutations.
CELLULAR AND MOLECULAR NEUROINFLAMMATORY PATHWAYS IN ALZHEIMER'S DISEASE
Neuroinflammatory pathways and microglial cell activation are associated with neuronal ectopic cell cycle activation (105). In particular, microglial activation induced by Aβ oligomers promotes neuronal ectopic cell cycle events (CCEs) via the tumor necrosis factor-alpha (TNF-α) and c-Jun kinase (JNK) signaling pathways. Accordingly, administration of non-steroidal anti-inflammatory drugs (NSAIDs) to AD transgenic mice prevents both microglial activation and the stimulation of CCEs (105,106). Two analyses report the capability of ibuprofen to alter the advancement of mild AD (107). However, subsequent AD clinical trials showed no effectiveness in individuals with mild dementia, probably because these drugs were administered in a late phase of CNS inflammation. Indeed, recent studies designate initial CNS inflammation as a promising target to prevent disease progression (19). Today, it is widely accepted that oxidative stress is strongly associated with the inflammation observed in AD (108).
In fact, neuroinflammatory processes can act both as cause and as effect of chronic oxidative stress (Figure 2). In this context, microglia play a pivotal role. Proinflammatory microglial activities may be detrimental in AD due to reactive oxygen and nitrogen intermediate species (ROS and RNS, respectively) leading to oxidative stress-induced neuronal death, which could be further exacerbated by chronic stress (109,110). Cumulative evidence suggests that microglial inflammation-induced oxidative stress in AD is amplified, whereas microglia-mediated clearance mechanisms are not functional (43,110). TNF-α exerts a key role in this early proinflammatory process observed in preclinical AD, as shown by preclinical studies in animal models of AD (111-114) as well as by human longitudinal studies (21,113-115). TNF-α is chronically released during the course of AD pathology, likely by activated microglia, neurons, and astrocytes stimulated by increased levels of extracellular Aβ (111). Aβ oligomeric forms activate microglia with anomalous TNF-α-mediated pathways in mouse models (68). Such an atypical stimulation of cerebral innate immunity is responsible for reduced serotonergic tone, a primary event in Aβ-related depression, a prodromal symptom of AD (70). On the other hand, TNF-α can stimulate γ-secretase activity, which results in an increased synthesis of Aβ peptides and a further increase in TNF-α release (113,116). It is hypothesized that this auto-amplified loop in the AD brain can contribute to the maintenance of excessive levels of TNF-α, which could then stimulate Aβ synthesis and neuronal loss while also inhibiting microglial phagocytosis of Aβ (Figure 2) (113,117). Finally, TNF-α significantly contributes to promoting insulin resistance and the ensuing cognitive decline in AD (118,119). Aβ oligomeric forms prompt peripheral glucose intolerance in mice by activating TNF-α signaling in the hypothalamus (120). Multiple studies detected elevated TNF-α levels in both mild cognitive impairment (MCI) and AD (21,113). Interestingly, Down syndrome cases with preclinical AD show significant links among augmented levels of plasma TNF-α, Aβ accumulation, and subsequent cognitive deterioration over the following years (115). TNF-α exerts its activity by binding two distinct high-affinity receptors (TNF-Rs) located at the cell surface: TNF-RI, ubiquitously expressed apart from erythrocytes, and TNF-RII, whose expression is limited to myeloid cells, endothelial cells, oligodendrocytes, microglia, astrocytes, and subpopulations of neurons (113). The concentrations of the soluble forms of the TNF receptors (sTNF-RI and sTNF-RII) are typically unaltered in CSF and blood of AD patients compared to controls (21). However, both TNF-α and TNF-RI concentrations are increased in postmortem brains of early-stage AD patients (113). Data in MCI subjects are controversial; longitudinal studies report associations between TNF-R concentrations and the risk of conversion from MCI to AD (21). Notably, the TNF-α receptor complex and its functional proteins are assumed to play a crucial role, since they link neuroinflammatory pathways to the amyloid deposition process in a chronically damaging and self-perpetuating way (21). A strong neurobiological link is also found in the AD brain between the deficit of anti-inflammatory cytokines, such as TGF-β1, and the early proinflammatory process observed in preclinical AD (70). TGF-β1 is a neurotrophic factor whose deficit exerts a key role in AD.
A selective impairment of the TGF-β1 pathway is present in early AD, both in the AD brain (121,122) and in AD animal models (71,123,124). This deficit seems to critically contribute to neuroinflammation in the AD brain. TGF-β1 displays both anti-inflammatory and neuroprotective actions (123,125) and stimulates Aβ clearance by microglia (126). Furthermore, it exhibits a primary role in synaptic plasticity and memory formation processes, thus supporting the transition from early- to late-phase long-term potentiation (LTP) (127). The relevance of TGF-β1 should also be reconsidered in the context of neuroinflammation resulting from microglial activation, which contributes to reactivating the neuronal cell cycle in the AD brain (128). According to this scenario, the reactivation of the neuronal cell cycle might be facilitated by the disruption of Smad-dependent TGF-β1 pathways. Overall, these studies suggest the potential contribution of the deficit of the Smad-dependent TGF-β1 pathway to neuroinflammation and cognitive impairment (70). Finally, neuroinflammation can exert a primary function in AD pathophysiology by interfering with nerve growth factor (NGF) maturation and function. NGF is a neurotrophic factor essential for the survival and homeostasis of basal forebrain cholinergic neurons, whose selective degeneration critically contributes to cognitive decline in AD patients (132,133). Studies in transgenic animal models of AD indicate that the proinflammatory process, initiated before plaque deposition and promoted by soluble Aβ oligomers, leads to an impairment of the NGF metabolic pathway characterized by a reduced conversion of the precursor proNGF to mature NGF (mNGF) as well as by increased degradation of mNGF (18,132,134). Neuroinflammatory processes promote an overactivation of metalloprotease-9 (MMP-9), as observed in the brains of Down syndrome patients (132), MCI subjects, and AD patients (135). Increased MMP-9 activity would then facilitate the degradation of mNGF, finally compromising mNGF activity in sustaining the trophic dependence of the cholinergic neurons (132). Notably, in Down syndrome cases showing preclinical AD, there is a strong correlation among increased plasma TNF-α, a deficit in NGF maturation (with increased concentrations of proNGF), and a greater degree of cognitive impairment (115). This study substantiates the key contribution of inflammatory markers (i.e., TNF-α), in combination with plasma Aβ1−42 levels and increased proNGF levels, to better predicting the worsening of "latent" AD pathology with the consequent cognitive decline in Down syndrome patients (115). The discovery of an imbalance in the metabolic pathway controlling NGF maturation and degradation in Down syndrome/AD patients provides a platform for the identification of novel biomarker candidates as well as for the development of disease-modifying drugs. Therefore, drug discovery processes should be directed in the future toward developing new drugs that are able to interfere with early CNS inflammation and, at the same time, rescue neurotrophin signaling (e.g., BDNF, NGF, TGF-β1) in the AD brain.
TARGETING NEUROINFLAMMATION IN ALZHEIMER'S DISEASE: EVIDENCE FROM ANIMAL MODELS
Among the different mediators of inflammation explored, TNF-α mediates proinflammatory processes in various ND including AD (136). Under normal conditions, TNF-α from glial cells modulates homeostatic, activity-dependent regulation of synaptic connectivity (137). On the other hand, this cytokine mediates the disrupting effects of Aβ on LTP in experimental AD.
Accordingly, mutant mice lacking TNF receptor type 1 exhibit normal LTP following Aβ application, and similar results are obtained with the use of anti-TNF agents, including the monoclonal antibody infliximab and thalidomide, which also inhibits TNF-α production (138). Generally, several studies indicate that blocking the TNF-α pathway in AD models is associated with: (I) improvement in memory decline, as tested in different behavioral tests evaluating cognitive function; (II) reduction in immunohistochemical and histopathological markers like the formation of Aβ plaques and NFT; and (III) reduction in the number of microglial cells in the AD brain (139). Similarly to TNF-α, the proinflammatory cytokine IL-1β also mediates the synaptotoxic effects of the Aβ peptide (140). Indeed, the interleukin-1 receptor antagonist (IL-1Ra) is able to reverse synaptic plasticity alterations triggered by the administration of the 40-amino-acid-long Aβ peptide (Aβ1−40) (141). However, the role of ILs in AD pathogenesis is far more complex, since some exert proinflammatory while others exert anti-inflammatory actions. In this frame, it is worth mentioning IL-12 and IL-23, which are increased in CSF in both AD and MCI (142,143). Notably, genetic ablation of IL-12 and IL-23 or therapeutic approaches directed against IL-12 and IL-23 signaling reduce the AD-like pathology, including histopathological and behavioral changes, making them attractive targets for the treatment of AD (144). On the other hand, IL-10 seems to play a protective role, since delivery of this cytokine via adeno-associated virus leads to markedly decreased microgliosis and astrogliosis as well as reversed cognitive impairment in transgenic AD mice (145), although the use of a different adeno-associated virus approach generates a different outcome (146). There is a growing interest in the role of complement and microglia in AD pathology (147). Microglial cells have prominent functions in complement-mediated synaptic pruning in the postnatal period (148,149). It is hypothesized that an inappropriate reactivation of this mechanism later in life could result in synapse loss, thus facilitating the progression of ND (150). In this frame, C1q, which mediates the toxic effects of Aβ oligomers on LTP, is increased in synaptic connections before plaque deposition, and inhibition of C1q, C3, or the microglial complement receptor CR3 diminishes phagocytic microglia, resulting in protection against synapse loss (151). Investigations conducted in transgenic AD mice also address the effects of NSAIDs on amyloid load and inflammation (152). These studies suggest that NSAIDs not only exert neuroprotection through the suppression of inflammatory events but also reduce early amyloid pathology by mechanisms that remain unclear (153). Of note, two selective cyclooxygenase-2 (COX-2) inhibitors are found to be effective in rescuing the LTP impairment induced by synthetic soluble Aβ1−42, whereas the same effect is not achieved with the cyclooxygenase-1 (COX-1) inhibitor piroxicam (154). Similarly, ibuprofen prevents early memory decline in an AD model, and this effect is associated with activation of hippocampal plasticity-related genes (155). Overall, these studies indicate that NSAIDs exert neuroprotection and prevent memory decline through the modulation of multiple neuronal pathways (156).
BIOMARKERS OF NEUROINFLAMMATION IN ALZHEIMER'S DISEASE
Most of the failed AD clinical trials, including trials investigating anti-inflammatory compounds, did not assess any biological in vivo identification of AD-related pathomechanistic alterations, thus preventing proof of mechanism (157) and including a percentage of subjects displaying non-AD pathophysiology (158). Therefore, robust biomarker-drug codevelopment pipelines are strongly recommended for next-generation clinical trials (159).
Fluid Biomarkers of Neuroinflammation in Alzheimer's Disease
Modifications of the concentrations of several cytokines (160-164) and other inflammatory biomarkers associated with either microglia [e.g., soluble TREM2 (sTREM2), monocyte chemoattractant protein-1 (MCP-1), and YKL-40 (165-168)] or astroglia [e.g., YKL-40 (161)] are extensively investigated in AD patients. These alterations potentially reflect the inflammatory mechanisms within the CNS coupled with the neurodegenerative pathways (11,166). A recent meta-analysis reports higher concentrations of YKL-40, sTREM2, MCP-1, and TGF-β in the CSF of AD patients compared to controls (160). In particular, robust evidence from several studies focuses on CSF YKL-40, which shows a fair classificatory capability in differentiating between AD individuals and controls as well as in predicting the progression from the asymptomatic to later prodromal and dementia stages (166-168). However, its role in differentiating subjects with AD from those with other dementias remains controversial, since neuroinflammation seems to be associated with neurodegeneration tout court and not with specific neurodegenerative pathways (12,169). The clinical meaning of inflammatory biomarkers in blood needs to be elucidated, as they might represent low-invasive and low-cost screening tools of cerebral inflammatory activity during the early asymptomatic stages of AD (170-172). The main issue concerning peripheral measurements of inflammatory biomarkers is that they may not directly reflect brain neuroinflammation (163). Nonetheless, IL-6 and IL-1β concentrations are significantly higher in AD compared with cognitively normal controls in four meta-analyses (160-163). IL-1β is a key molecule participating in the inflammatory response, cell proliferation, differentiation, and apoptosis. Some evidence suggests that IL-1β is produced and secreted by microglial cells in response to Aβ deposition, thus resulting in chronic neuroinflammation and, eventually, neuronal disruption, dysfunction, and neurodegeneration (173,174). A negative correlation between CSF concentrations of this cytokine and cognitive scores has also been described in AD (175). IL-6 levels are associated with the severity of cognitive decline as assessed by Mini-Mental State Examination (MMSE) scores (161). Notably, peripheral IL-6 concentrations positively correlate with cerebral ventricular volumes (176) and with matched CSF samples (177) in AD. The peripheral modifications of IL-6 levels could begin in the prodromal phase of AD; indeed, a recent meta-analysis highlights greater IL-6 concentrations in MCI subjects compared to controls (160). In line with these findings, a longitudinal study reports an association between elevated plasma IL-6 levels and a greater risk of cognitive decline at 2-year clinical follow-up. Other cytokines emerging as candidate peripheral inflammatory biomarkers are IL-2, IL-12, IL-18, and TGF-β (160-163).
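The meta-analyses cited above pool per-study case-control differences in biomarker concentrations into a single summary effect. As a hedged illustration of the standard DerSimonian-Laird random-effects computation on which such analyses typically rely (the per-study effect sizes and variances below are hypothetical, not values from the cited papers):

import math

# Hypothetical per-study effect sizes (Hedges' g) and variances for an
# AD-vs-control cytokine comparison; values are illustrative only.
studies = [(0.45, 0.02), (0.30, 0.05), (0.62, 0.03), (0.25, 0.04)]

# Fixed-effect (inverse-variance) pooling, needed for the heterogeneity test.
w_fixed = [1.0 / v for _, v in studies]
pooled_fixed = sum(w * g for (g, _), w in zip(studies, w_fixed)) / sum(w_fixed)
q = sum(w * (g - pooled_fixed) ** 2 for (g, _), w in zip(studies, w_fixed))
df = len(studies) - 1
c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - df) / c)  # DerSimonian-Laird between-study variance

# Random-effects pooling: down-weight studies by between-study variance.
w_re = [1.0 / (v + tau2) for _, v in studies]
pooled = sum(w * g for (g, _), w in zip(studies, w_re)) / sum(w_re)
se = math.sqrt(1.0 / sum(w_re))
print(f"pooled g = {pooled:.2f}, "
      f"95% CI = [{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}]")

The between-study variance term (tau2) quantifies exactly the heterogeneity discussed in the next paragraph: differing assays, fluid matrices, and cohort definitions inflate it and widen the pooled confidence interval.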
Overall, these studies have several biases to consider. First, the risk of misdiagnosis is high, since AD and MCI diagnoses are mainly clinically based in the majority of the studies, lacking the necessary biomarker information [e.g., cerebral amyloid-positron emission tomography (PET) uptake or CSF Aβ1−42 measurements]. This means that at least 20-25% of the AD patients and MCI subjects enrolled in the previous studies do not have cerebral amyloid deposition (6). Moreover, these studies are cross-sectional without an appropriate follow-up, and this could lead to incorrect MCI diagnosis. Indeed, the clinical picture of MCI is heterogeneous, not only with a 10-15% annual rate of developing AD (178) but also with a considerable proportion of individuals who recover, remain stable, or develop ND other than AD (179). In addition, the MCI classification (e.g., amnestic or non-amnestic), which significantly impacts clinical outcome (8,179,180), is inadequately specified in most of the studies. Furthermore, data regarding comorbidities (such as cerebrovascular diseases, coronary diseases, atrial fibrillation, periodontitis, and diabetes) or concomitant drugs (e.g., non-steroidal anti-inflammatory medications, corticosteroids, statins) that can significantly modify peripheral inflammatory biomarkers have been rarely reported. For instance, persistently higher plasma levels of IL-1β and IL-6 are observed in relation to cardiovascular diseases as well as atherosclerosis (181,182). Other potential biases include technical issues: detection methods (e.g., ELISA kits) for inflammatory biomarkers in biological fluids differ considerably among studies, as do sample handling approaches [e.g., measurements on different fluid matrices (plasma or serum) and storage protocols]. In conclusion, neuroinflammation is certainly a relevant pathophysiological mechanism of neurodegeneration in AD. However, we still lack reliable inflammatory biomarkers to be used in a screening context of use. In essence, sTREM2, MCP-1, IL-6, TGF-β, and, particularly, YKL-40 are interesting novel inflammatory CSF biomarkers, but they cannot yet be proposed for detecting the early asymptomatic phases of AD, nor is it established how they would be altered by disease-modifying treatments. Prospective observational studies enrolling large cohorts of participants with accurate clinical and biomarker-based characterizations are needed to identify potentially effective inflammatory blood-based biomarkers of AD.
PET Radiotracers Targeting Neuroinflammation in Alzheimer's Disease: State-of-the-Art on Human Studies
Several genetic association studies highlight a key role of neuroinflammation in AD by demonstrating the occurrence of specific genetic variations related to the immune response in patients with ND, including AD (183). As a direct consequence, the possibility of tracking the regional evolution of neuroinflammation and imaging the neuroinflammatory process non-invasively in AD patients opens up exciting novel opportunities to monitor disease progression and, eventually, to explore immune-therapeutic strategies to prevent or decelerate it. It is interesting to note that it could be possible to assess the neuroinflammatory status by conventional [18F]fluorodeoxyglucose (FDG)-PET, provided that the whole uptake curve is studied (184). Neuroinflammation can be measured more specifically using targeted radio-ligands for PET imaging, which allow the regional in vivo exploration of neuroinflammation, such as [11C]-PK11195 (185).
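Regional binding in the PET studies discussed below is often summarized relative to a reference region. As a simplified, hedged sketch of such a ratio-based readout (full kinetic modeling, e.g., reference tissue models, is the norm for TSPO tracers; the regions and uptake values here are hypothetical):

# Simplified quantification sketch for a PET neuroinflammation study.
# Real TSPO-PET analyses typically rely on kinetic modeling rather than
# static ratios; this SUVR-style computation only illustrates the idea.
regional_uptake = {              # mean tracer uptake per region (a.u.)
    "frontal_cortex": 1.42,
    "temporal_cortex": 1.55,
    "posterior_cingulate": 1.61,
    "cerebellar_gray": 1.10,     # reference region, assumed low specific binding
}

reference = regional_uptake["cerebellar_gray"]
suvr = {region: uptake / reference
        for region, uptake in regional_uptake.items()
        if region != "cerebellar_gray"}

for region, ratio in sorted(suvr.items(), key=lambda kv: -kv[1]):
    print(f"{region}: SUVR = {ratio:.2f}")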
A number of studies show alterations in [11C]-PK11195 binding in AD and several other ND (186-189), Parkinson's disease (190), and progressive supranuclear palsy (189,191), and the distributions of [11C]-PK11195 found in these studies are akin to the well-known distribution of neurodegeneration (e.g., posterior cortical regions in AD). However, one should also note that translocator protein (TSPO) gene polymorphisms can greatly affect binding affinity (192), and TSPO expression is not circumscribed to activated microglia; it can also occur on astrocytes or endothelial cells (193). In this context, a number of novel TSPO-specific PET radiotracers, both carbon-11- and fluorine-18-labeled, are currently available (194,195). In addition, a very recent tracer named [18F]-FEPPA shows high potential for TSPO-PET in humans (192). Interestingly, microglial activation is only one (albeit important) part of the chain of events that eventually lead to neuroinflammation and that can potentially be imaged with even more specific tracers. For example, protein misfolding, aggregation, and accumulation may trigger glial response and, therefore, neurotoxicity. To date, the causal relationships between neuroinflammation and other pathogenetic mechanisms of AD have not been elucidated yet. PET radiotracers can represent a suitable tool for untangling these dynamics along the roadmap of discovering new targets for anti-inflammatory disease-modifying strategies. In this context, a number of specific PET tracers can target protein aggregates in the brain. For example, [11C]-Pittsburgh compound-B ([11C]-PIB) is able to bind Aβ fibers (186,190). Aβ can also be imaged through other dedicated tracers (196). In addition, hyperphosphorylation and abnormal aggregation of tau, which is crucial to neuronal activity, can be imaged using specific ligands: T807 (flortaucipir) as well as the phenyl/pyridinyl-butadienyl-benzothiazole/benzothiazolium derivative PBB3 (197,198). Additionally, the tracers [18F]-FA and [18F]-EFA (analogs of 2-fluoroacetate, which can be utilized to inhibit glial cell metabolism) are able to selectively enter the metabolic compartment (199) and may, therefore, be promising candidates for evaluating glial metabolism when considering the astrocytic response. Finally, there are other molecular targets that can offer a more exhaustive depiction of in vivo neuroinflammation (200). For example, the cyclooxygenase (COX) enzyme is involved in both inflammation and the generation of proinflammatory mediators. In this context, COX-1 radioligands, like [11C]-KTP-Me, show promising results in AD animal models (201). In addition, the cannabinoid receptor type 2 (CB2R) is upregulated in activated microglia in various ND (202), possibly in conjunction with a neuroprotective effect (203), and postmortem studies emphasize the potential of compounds like [11C]-RS-016, which show high specific binding (204). This emphasizes the role of CB2R as an additional potential target for PET imaging of neuroinflammation in humans. Further encouraging targets examined in preclinical studies are the purinergic receptor P2X7 ([11C]-GSK1482160) (205) and the adenosine receptor A2AR (i.e., [11C]-TMSX).
WHY DID ANTI-INFLAMMATORY THERAPY FAIL IN ALZHEIMER'S DISEASE?
Clinical Trials of Anti-inflammatory Drugs in Alzheimer's Disease
NSAIDs have long been hypothesized to play a protective role in AD. This assumption is reinforced by several cohort analyses.
A recent meta-analysis including 16 investigations demonstrates that present or previous use of NSAIDs is linked to a decreased relative risk of AD (0.81; 95% confidence interval, 0.70-0.94) (206). Despite the observational epidemiological data suggesting a protective effect of NSAIDs and the evidence for a biologically plausible role of anti-inflammatory treatment, all placebo-controlled trials of a wide range of anti-inflammatory agents (NSAIDs, corticosteroids, and others) in both mild-to-moderate AD patients (Table 1) and MCI subjects (Table 2) are negative. Studies in cognitively normal subjects at risk of developing AD are also negative (Table 3). The first large primary prevention study of naproxen and celecoxib [Alzheimer's Disease Anti-inflammatory Prevention Trial (ADAPT)] was prematurely interrupted for cardiovascular safety concerns after the enrollment of 2,528 subjects and their treatment for a median time of 2 years. The study was not able to support the hypothesis that either drug could postpone AD onset in adults with a family history of dementia (227). A subsequent 2-year primary prevention trial [Impact of Naproxen Treatment in Presymptomatic Alzheimer's Disease (INTREPAD)] compared the effects of naproxen and placebo on the Alzheimer Progression Score (APS) in 195 cognitively normal older persons with a positive family history of AD (226). Over time, the APS scores progressively increased to a similar extent in both study groups, thus suggesting that naproxen does not provide any benefit over placebo in slowing the progression of presymptomatic AD.
Stage-Dependent Neuroinflammatory Process in the Alzheimer's Brain
In spite of emerging epidemiological evidence, all large, long-term, randomized, placebo-controlled investigations aiming at attenuating cerebral inflammation in AD display negative outcomes. The fact that anti-inflammatory therapies are not able to safeguard patients with overt dementia has been debated. Actually, a trial recruiting MCI individuals highlighted that rofecoxib could accelerate the conversion to AD (223). Moreover, a primary prevention study involving celecoxib and naproxen in cognitively healthy elderly individuals with a family history of AD was terminated in advance due to negative or harmful drug effects (225,228). Additional long-term, controlled studies examining anti-inflammatory drugs, including tarenflurbil in mild AD patients (220) and prednisone (217) and celecoxib (208) in mild-to-moderate AD patients, report detrimental effects vs. placebo. The negative and/or harmful effects of NSAIDs documented in AD, MCI, and the stages preceding AD are apparently in conflict with epidemiological data indicating diminished AD incidence after sustained treatment with NSAIDs. This is potentially related to the stage of disease at which NSAID exposure occurs. In this context, two different inflammatory responses are assumed to exist in the AD pathophysiological process: (I) one, at the early preclinical stage, with a predominantly proinflammatory component that is amenable to therapy; (II) another, at a later clinical stage, with predominantly innate/adaptive immune reactions not responsive to anti-inflammatory therapy (19).
During the early inflammation stage, neurons stimulated by Aβ initiate the inflammatory process and then induce intermediate microglial activation and recruitment around Aβ-burdened neurons. Both neurons and microglia elicit a disease-exacerbating process characterized by the release of proinflammatory mediators (cytokines and chemokines) (Figure 3). The inflammatory immune response of the late plaque-associated stage involves different processes, including full microglial activation, microgliosis, and CNS invasion by peripheral monocytes. Both microglia and monocytes participate in phagocytic activities to eradicate toxic Aβ oligomers and, probably, cellular debris (19). This assumption is in line with data from the Rotterdam (229), Cache County (230), and US Veterans (231) observational studies. The above-mentioned analyses emphasize the lack of protection following 2-year NSAID exposure before dementia onset. If the timing of exposure determines whether NSAID administration is beneficial or harmful, then the negative results of the previously mentioned studies (ADAPT and INTREPAD) are not unexpected, given that the exposure of the participants to NSAIDs was restricted (2 years). On this basis, NSAIDs might be useful for AD prevention when their administration occurs years before the usual onset age; however, when used later in life, they might increase the risk of disease. We cannot exclude the possibility that (I) the majority of the advantageous effects of NSAIDs documented in epidemiological studies may originate from different types of bias (232) and (II) there is actually no established impact of NSAIDs on AD prevention or treatment (233). Alector/AbbVie have generated a monoclonal antibody (AL002) that binds and activates TREM2. AL002 entered its first phase 1 trial in 51 healthy adults and 16 AD patients (234). Alector is also starting its first trial of the anti-CD33 antibody AL003. The microglial receptor CD33 opposes the effects of TREM2 signaling and may present a more amenable target because it would be inhibited rather than activated. In the first phase of the trial, 42 healthy adults will receive a single treatment of either placebo or one of seven different AL003 doses. The second, multiple-dose phase will enroll 12 AD patients, two of whom will receive placebo (234). Of note, other drugs targeting neuroinflammation to treat AD are being developed and are undergoing clinical testing. XPro1595 is currently in phase 1b clinical trials. Other examples are GC021109 and NP001. XPro1595 is a variant of TNF-α that forms heterotrimers with native soluble TNF-α and prevents its interaction with type 1 TNF-α receptors (235). Unlike other non-selective TNF-α inhibitors, XPro1595 does not suppress innate immunity or myelination mediated by type 2 receptors (236). Differently from etanercept, long-term treatment with XPro1595 does not suppress hippocampal neurogenesis, learning, and memory in adult mice (237). In 5xFAD mice, twice-weekly subcutaneous administration of XPro1595 for 2 months reduced brain amyloid deposition and immune cell infiltration and improved synaptic function (238). In young TgCRND8 mice, continuous subcutaneous infusion of XPro1595 for 1 month prevented brain amyloid deposition and normalized hippocampal neuron synaptic function (239). In 3xTg mice, intracranial administration of XPro1595 reduced amyloid pathology (240).
In aged wild-type rats, intracranial infusions of XPro1595 for 6 weeks reduced microglial activation and improved synaptic function and cognition (241). A 12-week, open-label, phase 1b study of XPro1595 (weekly injections of 0.03, 1.0, or 3.0 mg/kg) is ongoing in 18 mild-to-moderate AD patients (NCT03943264). Participants were required to have a positive amyloid test and evidence of peripheral inflammation [elevated blood C-reactive protein (CRP)]. Biomarkers of neuroinflammation in blood and CSF (CRP, TNF-α, IL-1β, and IL-6) are being measured. GC021109 targets microglial cells by binding the P2Y6 receptor, a metabotropic G-protein-coupled receptor whose natural ligand is the nucleotide uridine diphosphate (UDP). Astrocytes release ATP in response to the presence of Aβ aggregates, and P2Y6 signaling is thought to be involved in shifting the phenotype of microglia, which tend to surround amyloid plaques, from patrolling to phagocytic (242). GC021109 has been reported in the biotech press to stimulate microglial phagocytosis and to inhibit microglial release of proinflammatory cytokines such as IL-12; however, this information has not been published in the peer-reviewed literature. A phase 1a study in 44 healthy volunteers was carried out in 2015 (NCT02254369), and a 4-week phase 1b study in 39 mild-to-moderate AD patients was completed in 2016 (NCT02386306). However, no results were reported. NP001 is a pH-adjusted intravenous formulation of purified sodium chlorite. Within monocytes/macrophages, chlorite is converted into taurine chloramine, which downregulates nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) expression and inhibits production of the proinflammatory cytokine IL-1β. These mechanisms of downregulation transform inflammatory monocytes/macrophages from a proinflammatory to a basal phagocytic state. NP001 has been tested in patients with amyotrophic lateral sclerosis (243). A small study planned in 14 mild-to-moderate AD patients (NCT03179501) was interrupted in 2018 for poor enrollment.
Preliminary Evidence of a Potential Biological Effect of NSAIDs
Profiling molecular pathways related to ND is expected to reveal novel pathways for therapeutic agents. In this context, inflammation represents a primarily involved pathway (12,19). Interestingly, a meta-analysis including 175 studies reports changes in several inflammatory biomarkers (IL-6, CRP, and TNF-α) in AD (161). Another meta-analysis including nine longitudinal studies shows a protective effect of NSAIDs against AD progression (244). Changes in the concentrations of blood (serum) inflammatory proteins, including IL-6, CRP, and TNF-α, define a serum-based proteomic signature potentially useful for AD diagnosis (245-247). Hence, according to the literature, anti-inflammatory compounds might be employed as therapeutic agents in AD and other ND. In this regard, a novel precision medicine-based model for targeting NSAID therapy to specific AD patients has recently been proposed by O'Bryant and colleagues. In particular, they determined whether a blood proteomic companion diagnostic (CDx) is able to predict response to NSAID treatment (248). The analysis of the proteome in plasma samples from the Alzheimer's Disease Cooperative Study (ADCS) anti-inflammatory clinical trial, which included 1-year administration of rofecoxib (25 mg once daily), naproxen (220 mg twice daily), or placebo (N = 351) (215), indicates that an overall NSAID-general CDx detects treatment response with 87% accuracy.
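The CDx algorithm itself is not detailed here; the following is a hedged sketch of how a blood proteomic signature might, in principle, be trained and cross-validated to predict treatment response. The feature panel, the synthetic data, and the choice of a random forest are illustrative assumptions, not the published method:

# Hypothetical sketch of a proteomic companion diagnostic (CDx):
# predict NSAID treatment response from baseline plasma proteins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 351                                   # trial-sized cohort, synthetic data
proteins = ["IL6", "CRP", "TNFa", "IL1b", "YKL40"]   # assumed feature panel
X = rng.normal(size=(n, len(proteins)))   # baseline plasma levels (z-scored)
# Synthetic "responder" label, loosely tied to two markers plus noise.
y = ((0.8 * X[:, 0] - 0.5 * X[:, 2]
      + rng.normal(scale=1.0, size=n)) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")

Cross-validation of this kind guards against the optimistic bias of evaluating a signature on the same samples used to fit it, which is essential before any accuracy claim for a companion diagnostic.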
Drug-specific companion diagnostics (Rofecoxib-CDx and Naproxen-CDx) achieve a very high degree of accuracy in both the rofecoxib (98%) and naproxen (97%) arms (248). This is a relevant example of direct evidence for a precision medicine-based model to address AD treatment via the creation of CDx-driven therapeutics.
Preliminary Evidence of a Potential Biological Effect of Monoclonal Antibodies Selectively Targeting Aβ Protofibrils
In the last 20 years, a rising body of experimental studies has indicated that soluble Aβ protofibrils are more synaptotoxic than insoluble Aβ plaque cores. For instance, the former display higher rates of impairment of synaptic structure and function, including LTP, than plaques (249-251). Of note, solubilization of Aβ plaque cores is strictly related to the release of smaller Aβ species, such as dimers, and a downstream increase in synaptotoxicity (252). Therefore, it is argued that prefibrillar Aβ1−42 assemblies, rather than monomers or dimers, are the proximate mediators of Aβ toxicity (253). With regard to CNS-resident immune cells, growing evidence indicates that small soluble Aβ1−42 protofibrils are the main trigger of microglial activation. Indeed, experimental models of AD indicate that both microglia and astrocytes display not only a high sensitivity to Aβ structure (16,17,33,68,86) in the internalization process but also a greater affinity for soluble Aβ protofibrils than for mature insoluble fibrils (254,255). In this context, it is reported that small soluble Aβ1−42 protofibrils, rather than fibrils, can induce microglial activation, as reflected by increased cerebral levels of TNF-α (255). Several studies employing mAb158, a protofibril-selective antibody (the murine version of the clinical antibody BAN2401), suggest that astrocytic Aβ uptake depends on the size and/or composition of Aβ aggregates, since astrocytes preferentially engulf oligomeric Aβ over its fibrillar aggregation states (258). Recent studies conducted in mouse models of AD demonstrate that the antibody significantly slows down Aβ accumulation in astrocytes, reducing the downstream Aβ-induced neuronal toxicity (256). The authors argue that their results provide strong evidence for astrocytes playing a key mechanistic role in anti-Aβ immunotherapy.
EXERCISE AS AN ANTI-INFLAMMATORY THERAPY IN ALZHEIMER'S DISEASE
Acute, unaccustomed exercise (i.e., of an unusual duration and/or intensity) can increase oxidative stress and act as a proinflammatory stimulus (262,263). However, this response is attenuated when exercise is performed regularly, with strong evidence supporting that "chronic" exercise upregulates an endogenous systemic anti-inflammatory response (16). Large cohort studies indicate that higher levels of physical activity are inversely associated with inflammatory biomarkers, for instance CRP (264,265). There is meta-analytical evidence that regular physical exercise can reduce inflammation-related biomarkers (e.g., CRP, TNF-α) in middle-aged and older adults (266,267), and these benefits are also present in individuals with cognitive impairment (268). Animal research indicates that the anti-inflammatory effects of exercise can also reach the brain tissue. Physical exercise training results in an enhanced anti-inflammatory status, as reflected by an increased expression of anti-inflammatory cytokines (including IL-10) coupled with a decrease in proinflammatory cytokines (including TNF-α), at the hippocampal level in a rat model of AD (269).
Chronic exercise also promotes a conversion of microglia from the proinflammatory (M1) to the anti-inflammatory (M2) phenotype in different rodent models of disease, including AD (269-272). Although the mechanisms underlying the anti-inflammatory effects of exercise remain to be clearly elucidated, several pathways are currently proposed. Notably, contracting muscles act as an endocrine organ by releasing myokines (i.e., cytokines and other small peptides) into the bloodstream, which, in turn, induce numerous health benefits (such as a decrease in inflammation) at the multisystemic level, including the brain (273,274). Muscle-derived IL-6 promotes the systemic production of anti-inflammatory cytokines (IL-1Ra, IL-10) and downregulates the expression of proinflammatory cytokines (TNF-α, IL-1β) (275). Other proposed mechanisms include exercise-induced reductions in adiposity (especially visceral fat, which contributes to systemic inflammation), on the one hand, and increases in vagal tone, on the other hand, through the cholinergic anti-inflammatory pathway, an evolutionarily ancient circuit that modulates immune responses and the progression of inflammatory diseases (276,277). In conclusion, given the documented relevance of inflammation in most ND (169), there is a strong biological rationale to support that exercise might serve as a coadjuvant therapeutic strategy against such conditions.
General Overview on Precision Medicine
The official launch of the US Precision Medicine Initiative (PMI) in 2015 (https://obamawhitehouse.archives.gov/precision-medicine) by US President Obama, followed by the National Institutes of Health (NIH) development of the US PMI Cohort Program (PMI-CP) (278) and the creation of the US "All of Us Research Program" (available at https://allofus.nih.gov/), is contributing to making precision medicine one of the key topics in biomedical research worldwide. These facts support the evolution of medicine from the outdated "one-size-fits-all" paradigm, according to which treatments are conceived for the "average patient," to the search for comprehensive and accurate stratification of individuals and future individually tailored therapeutic modalities and targeted therapies (279). Indeed, genetic and biological heterogeneity among individuals sharing the same clinical features (the so-called clinical syndrome) is highly frequent in polygenic, multifactorial diseases with complex and non-linear pathophysiological dynamics, such as cancer and AD. In this regard, it is acknowledged that the adaptive and innate immune systems are characterized by enormous individual heterogeneity that accounts for the subject-specific response to vaccines and other immunomodulatory therapies (280-282). As a result, some drugs, regularly administered, can be of benefit only to a restricted subset of patients; other drugs might even have detrimental effects in some specific ethnic groups (283). Hence, identifying the molecular/cellular and environmental factors that indicate whether and how a single AD patient will respond to a specific therapy is crucial (284). The shift to individualized therapies and targeted treatments needs exploratory, unbiased, high-throughput, integrative, large-scale analyses of the features of the individuals in disease cohorts (279,284,285).
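As a minimal sketch of the data-driven cohort stratification described in this section, the following combines two hypothetical "omic" feature blocks, reduces dimensionality, and clusters subjects into candidate subgroups; all data and parameter choices (feature counts, number of clusters) are assumptions:

# Minimal sketch of data-driven cohort stratification: combine two
# "omic" feature blocks, reduce dimensionality, and cluster subjects.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
n_subjects = 200
proteomics = rng.normal(size=(n_subjects, 50))     # e.g., CSF protein panel
transcriptomics = rng.normal(size=(n_subjects, 100))

# Scale each block, concatenate, and project to a low-dimensional space.
X = np.hstack([StandardScaler().fit_transform(proteomics),
               StandardScaler().fit_transform(transcriptomics)])
Z = PCA(n_components=10, random_state=0).fit_transform(X)

# Partition the cohort into candidate biological subgroups.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
for k in range(3):
    print(f"subgroup {k}: {np.sum(labels == k)} subjects")

In practice, the number of subgroups and their biological meaning would have to be validated against clinical outcomes and independent cohorts before informing any tailored therapy.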
Cohorts stratified according to different high-throughput multimodal technological platforms ("omic" sciences), via systems biology (285,286), and different neuroimaging modalities, via systems neurophysiology (285), can be assimilated into disease modeling to stratify and predict AD patient subgroups (279,285). Both systems biology and systems neurophysiology enable a holistic, systemic exploration of complex interactions in biological systems, thus allowing an overview of cells/groups of cells, tissues, organs, organisms, and populations at multiple scales. High-throughput, integrative approaches permit the recovery of exhaustive biological information, supported by advanced and powerful bioinformatics. This will enable the inclusive integration of both multiomic and clinical data to attain fast and meaningful interpretation. Precision medicine capitalizes on these theoretical and technological advancements (287). Particularly, the integration of the "omics" and the development of "multiomic" disciplines, such as proteogenomics, whereby the involved technologies are next-generation sequencing and mass spectrometry, seem able to offer substantial support for accurate phenotype prediction, individualized patient management, and precision medicine (288,289). Establishing precision medicine requires the implementation of a network of integrated disciplines and methods including the "omic" sciences, neuroimaging modalities, cognitive examinations, and clinical features. All these converge toward many domains investigated using the systems theory approach (290). This allows the development of models explaining all system levels, explored via systems biology and systems neurophysiology, and the different categories and scales of spatiotemporal data describing the complexity and clinical heterogeneity of any polygenic disease in any medical field, from oncology to immunology (284) (Figure 4) to neurology (285,291,292). Precision medicine aims at ameliorating the efficacy of prevention strategies and therapies using customized treatments tailored to the individual's "biological make-up" (285,291,292), based on the "P4 Medicine" (P4M) framework (293). To safeguard the rapid and full expansion of precision medicine in AD, the international Alzheimer Precision Medicine Initiative (APMI) and its Cohort Program (APMI-CP) (available at https://www.apmiscience.com/), thematically associated with the US PMI and the US "All of Us Research Program," are currently established and operational (279). In this connection, a therapeutic plan based on immune/inflammation modulation for a subset of AD and associated dementias is currently ongoing within the "Korean AD Research Platform Initiative Based on Immune-Inflammatory Biomarkers" (K-ARPI) (294).
FIGURE 4 | A roadmap proposed toward personalized immunology. There exist both horizontal and vertical roadmaps toward personalized immunology. Vertically, to translate sample stratification into clinical therapies, it is necessary to utilize state-of-the-art "omics" analysis and network integration approaches to stratify patients into subgroups and then implement personalized therapeutic approaches to treat individual patients, which requires overcoming various types of barriers at different steps.
Horizontally, it might be necessary to go through at least seven steps to enable personalized immunotherapies: (1) classic symptom-based approach, (2) deep phenotyping approach, (3) multilayer "omics"-based profiling, (4) cell-type-specific "omics," (5) state-specific "omics," (6) single-cell "omics" and dynamic response analysis of immune cells, and (7) integrated network analysis. Under the first layer (the so-called stratification layer), different colors of patients indicate individual patients with different cellular and/or molecular profiles, while brackets represent patient subgroups; under the second layer (the so-called technique layers), different small circles with distinct colors indicate different immune cells, while big circles represent patient (sub)groups; under the technique layers, the microarray snapshot represents either microarray-based or RNA-seq-based transcriptome analysis; under the third layer (the so-called therapeutic layer), the syringes with different colors or tonalities indicate different therapeutic approaches; P1, ..., Pn at step 7 designate different patients; G1, G2, G3, and G4 represent different genes, with the arrows between them representing regulatory relationships. DEG, differential expression gene; FACS, fluorescence-activated cell sorting; KNN, K-nearest neighbors; PEEP, personalized expression perturbation profile; sc, single-cell; SSN, sample-specific network; SVM, support vector machine; TCR/BCR, T-cell receptor/B-cell receptor. From Delhalle et al. (284). Copyright © 2018, Springer Nature. Reprinted with permission under a Creative Commons CC BY license.

CONCLUSIONS

Systems theory/biology-based studies are needed to untangle the spatiotemporal dynamics of neuroinflammation and its related subcomponents. Biomarkers simultaneously tracking different molecular pathways (body fluid matrices) along with brain neuroinflammation endophenotypes (neuroimaging markers) can reveal key temporal-spatial dynamics among glia, neuroinflammation, and other AD pathophysiological mechanisms. Implementing this approach will be necessary to fill the gap in our understanding of whether neuroinflammation represents a direct pathophysiological mechanism, a compensatory mechanism, or both, along the AD continuum. According to this assumption, a new pathway (mechanism)-based pharmacological model, intended to establish effective and functional biomarker-guided targeted and tailored treatments for preventive and neuroinflammation-freezing strategies, needs to be developed.

AUTHOR CONTRIBUTIONS

HH, AV, and SL designed the concept of the manuscript, provided supervision, and assisted with the writing and content of the manuscript. AV and SL assisted in the preparation of the figures. All authors contributed to writing the manuscript, critically reviewed the completed manuscript, and approved the submitted version.

FUNDING

This research benefited from the support of the Program PHOENIX, led by the Sorbonne University Foundation and sponsored by la Fondation pour la Recherche sur Alzheimer. HH is an employee of Eisai Inc. During his previous work (until April 2019), he was supported by the AXA Research Fund, the Fondation partenariale Sorbonne Université and the Fondation pour la Recherche sur Alzheimer, Paris, France.
2020-03-31T16:38:32.966Z
2020-03-31T00:00:00.000
{ "year": 2020, "sha1": "2beecb79c0f7b54c631fcc3e11101d32401c540b", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2020.00456/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2beecb79c0f7b54c631fcc3e11101d32401c540b", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
2326390
pes2o/s2orc
v3-fos-license
The ATILF-LLF System for Parseme Shared Task: a Transition-based Verbal Multiword Expression Tagger

We describe the ATILF-LLF system built for the MWE 2017 Shared Task on automatic identification of verbal multiword expressions. We participated in the closed track only, for all the 18 available languages. Our system is a robust greedy transition-based system, in which MWEs are identified through a MERGE transition. The system was meant to accommodate the variety of linguistic resources provided for each language, in terms of accompanying morphological and syntactic information. Using the per-MWE F-score, the system was ranked first for all but two languages (Hungarian and Romanian).

Introduction

Verbal multiword expressions (hereafter VMWEs) tend to exhibit more morphological and syntactic variation than other MWEs, if only because in general the verb is inflected, and it can receive adverbial modifiers. Furthermore some VMWEs, in particular light verb constructions (one of the VMWE categories provided in the shared task), allow for the full range of syntactic variation (extraction, coordination, etc.). This renders the VMWE identification task even more challenging than general MWE identification, in which fully frozen and contiguous expressions help increase the overall performance. [Footnote 1: 2 systems participated for one language only (French), and 5 systems participated for more than one language.] The data sets are quite heterogeneous, both in terms of the number of annotated VMWEs and of accompanying resources (for the closed track). [Footnote 2: Some of the data sets contain the tokenized sentences plus VMWEs only (BG, ES, HE, LT); some are accompanied with morphological information such as lemmas and POS tags (CS, MT, RO, SL); and for the third group (the 10 remaining languages), full dependency parses are provided. See (Savary et al., 2017) for more information on the data sets.] So our first priority when setting up the architecture was to build a generic system applicable to all the 18 languages, with limited language-specific tuning. We thus chose to participate in the closed track only, relying exclusively on training data, the accompanying CoNLL-U file when available, and basic feature engineering. We developed a one-pass greedy transition-based system, which we believe can handle discontinuities elegantly. We integrated more or less informed feature templates, depending on their availability in the data. We describe our system in section 2, the experimental setup in section 3, the results in section 4 and the related works in section 5. We conclude in section 6 and give perspectives for future work.

System description

The identification system we used is a simplified and partial implementation of the system proposed in Constant and Nivre (2016), which is in itself a mild extension of an arc-standard dependency parser (Nivre, 2004). Constant and Nivre (2016) proposed a parsing algorithm that jointly predicts a syntactic dependency tree and a forest of lexical units including MWEs. In particular, in line with Nivre (2014), this system integrates special parsing mechanisms to deal with lexical analysis. Given that the shared task focuses on the lexical task only and that datasets do not always provide syntactic annotations, we have modified the structure of the original system by removing syntax prediction, in order to use the same system for all 18 languages. A transition-based system consists of applying a sequence of actions (namely transitions) to incrementally build the expected output structure in a bottom-up manner. Each transition is usually predicted by a classifier given the current state of the parser (namely configuration).
A configuration in our system consists of a triplet c = (σ, β, L), where σ is a stack containing units under processing, β is a buffer containing the remaining input tokens, and L is a set of processed lexical units. The processed units correspond either to tokens or to VMWEs. When corresponding to a single token, a lexical unit is composed of one node only, whereas a unit representing a (multi-token) VMWE is represented as a binary lexical tree over the input tokens. Every unit is associated with a set of linguistic attributes (when available in the working dataset): its actual form, lemma, part-of-speech (POS) tag, syntactic head and label. The initial configuration for a sentence consists of an empty stack, a buffer containing all the tokens of the sentence, and an empty set L. The transitions of this system are limited to the following: (a) the Shift transition takes the first element in the buffer and pushes it onto the stack; (b) the Merge transition removes the two top elements of the stack, combines them as a single element, and adds it to the stack [footnote: the newly created element is assigned linguistic attributes using basic concatenation rules that would deserve to be improved in future experiments: e.g., the lemma is the concatenation of the lemmas of the two initial elements]; (c) the Complete transition moves the upper element of the stack to L, whether the element is a single token or an identified VMWE; and finally (d) the Complete-MWT transition, only valid for multiword tokens (MWT), acts as Complete, but also marks the element moved to L as a VMWE. Training such a system means enabling it to classify a configuration into the next transition to apply. This requires an oracle that determines what is an optimal transition sequence given an input sentence and the gold VMWEs. We created a static oracle using a greedy algorithm that performs Complete as soon as possible (i.e. when a non-VMWE token or a gold VMWE is on top of the stack) and Merge as late as possible (i.e. when the right-most component of the VMWE is on top of the stack) (see Figure 1). Note that an oracle sequence is composed of exactly 2n transitions: every single token requires one Shift and one Complete, and each multi-token VMWE of length m requires m Shifts, m−1 Merges and a single Complete. The proposed system has some limitations with respect to the shared task annotation scheme. First, for now, our system does not handle embedded VMWEs (only the longest VMWE is considered in the oracle, and the transition system cannot predict embeddings). This feature could be straightforwardly activated, as VMWEs are represented with lexical trees. Note also that the system cannot handle overlapping MWEs like "take(1,2) a bath(1) then a shower(2)", since this requires a graph representation (not a tree).

Experimental setup

For replication purposes, we now describe how the system has been implemented (Subsection 3.1), which feature templates have been used (Subsection 3.2) and how they have been tuned (Subsection 3.3). Simple descriptions of the system settings are provided in Table 1. We thereafter use the symbol B_i to indicate the i-th element in the buffer. S_0 and S_1 stand for the top and the second top elements of the stack. For every unit X in the stack or the buffer, we denote by Xw its word form, by Xl its lemma and by Xp its POS tag. The concatenation of two elements X and Y is noted XY.
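To make the transition system and its static oracle concrete, here is a minimal sketch of the oracle logic described above (Complete as soon as possible, Merge as late as possible). It is an illustration rather than the authors' released code: the frozenset representation of stack elements and the assumption of non-overlapping, non-interleaved gold VMWEs are choices made for the sketch.

```python
def static_oracle(n_tokens, gold_vmwes):
    """Oracle transition sequence for one sentence (illustrative names).

    n_tokens:   number of tokens, indexed 0 .. n_tokens-1
    gold_vmwes: iterable of sets of token positions; assumed non-overlapping,
                non-embedded and non-interleaved (cf. the paper's limitations)
    """
    vmwe_of = {t: frozenset(v) for v in gold_vmwes for t in v}
    stack, buffer, seq = [], list(range(n_tokens)), []
    while stack or buffer:
        if stack:
            top = stack[-1]
            gold = vmwe_of.get(min(top))
            # Complete as soon as possible: plain token or finished VMWE on top
            if gold is None or top == gold:
                seq.append("COMPLETE")
                stack.pop()
                continue
            # Merge as late as possible: the rightmost VMWE component is on top
            if len(stack) > 1 and max(top) == max(gold) and min(stack[-2]) in gold:
                seq.append("MERGE")
                stack.append(stack.pop() | stack.pop())
                continue
        assert buffer, "ill-formed (e.g. interleaved) gold annotation"
        seq.append("SHIFT")
        stack.append(frozenset([buffer.pop(0)]))
    return seq

# A 5-token sentence with the discontinuous VMWE {1, 3} yields exactly
# 2 * 5 = 10 transitions; the foreign token 2 is shifted and immediately
# completed before the rightmost component triggers the Merge:
print(static_oracle(5, [{1, 3}]))
# ['SHIFT', 'COMPLETE', 'SHIFT', 'SHIFT', 'COMPLETE', 'SHIFT', 'MERGE',
#  'COMPLETE', 'SHIFT', 'COMPLETE']
```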
Implementation

For a given language, and a given train/dev split, we train three SVM classifiers (one vs. all, one vs. one, and error-correcting output codes) and we select the one given by majority vote. [Footnote 5: The whole system was developed using Python 2.7, with 2,200 lines of code, using the open-source Scikit-learn 0.19 libraries for the SVMs. The code is available on Github: https://goo.gl/EDFyiM] Note that some configurations only allow for a unique transition type, and thus do not require transition prediction. A configuration with a one-token stack and an empty buffer requires the application of a Complete, as the last transition of the transition sequence. Similarly, a configuration with an empty stack and a non-empty buffer must lead to a Shift transition. During the feature tuning phase, for a few languages we added a number of hard-coded procedures aiming at enforcing specific transitions in given contexts. These procedures all use a VMWE dictionary extracted from the training set (hereafter the VMWE dictionary). For German and Hungarian, we noticed a high percentage of VMWEs with one token only. [Footnote 6: These correspond mainly to cases of verb-particle constructions (tagged VPC in the data sets) in which the particle is not separated from the verb.] We added the Complete-MWT transition for these languages, which we systematically apply when the top of the stack S_0 is a token appearing as an MWT in the VMWE dictionary (cf. setting Q in Table 1). For other languages with long and discontinuous expressions, we used other hard-coded procedures that experimentally proved to be beneficial (setting P in Table 1). We systematically apply a Complete transition when S_1l B_0l or S_1l B_1l forms a VMWE existing in the VMWE dictionary. Moreover, an obligatory Shift is applied when the concatenation of successive elements in the stack and the buffer belongs to the VMWE dictionary. In particular, we test S_1l S_0l B_0l, S_0l B_0l, S_0l B_0l B_1l and S_0l B_0l B_1l B_2l.

Feature Templates

A key point in a classical transition-based system is feature engineering, where feature template design and tuning can play a very important role in increasing the accuracy of the system's results.

Basic Linguistic Features

First of all, depending on their availability in the working dataset and on the activation of the related settings (cf. G and J in Table 1), we extracted linguistic attributes in order to generate features such as S_0l, S_0p and S_0w, where l, p and w stand for the lemma, the part of speech, and the word form respectively.

Table 1: System setting code descriptions. The 'F' column indicates whether the setting is a feature-related setting ('+') used by the classifiers or whether ('-') it is a hard-coded implementation enhancement.

Code | F | Setting description
B | + | use of transition history (length 1)
C | + | use of transition history (length 2)
D | + | use of transition history (length 3)
E | + | use of B_1
F | + | use of bigrams (S_1S_0, S_0B_0, S_1B_0, S_0B_1)
G | + | use of lemma
H | + | use of syntax dependencies
I | + | use of trigrams S_1S_0B_0
J | + | use of POS tag
K | + | use of distance between S_0 and S_1
L | + | use of training corpus VMWE lexicon
M | + | use of distance between S_0 and B_0
N | + | use of (S_0B_2) bigram
O | + | use of stack length
P | - | enabling dictionary-based forced transitions
Q | - | enabling Complete-MWT transition

The same features are extracted for the unigrams S_1, B_0 and B_1 (when used) (cf. E in Table 1). When enabled, the bigram features for a pair XY of elements are XpYp, XlYl, XwYw, XpYl and XlYp. The trigram-based features are extracted in the same way.
Basically, the involved bigrams are S_1S_0, S_0B_0, S_1B_0 and S_0B_1 (cf. setting F in Table 1), but we also added the S_0B_2 bigram for a few languages (cf. N in Table 1). For trigrams, we only used the features of the S_1S_0B_0 triple (cf. I in Table 1). Finally, because the datasets for some languages do not provide basic linguistic attributes such as lemmas and POS tags, we tried to bridge the gap by extracting unigram "morphological" attributes when the POS tag and lemma extraction settings were disabled (cf. G and J in Table 1). The features of S_0 for such languages would be S_0w, S_0r and S_0s, where r and s stand for the last two and three letters of S_0w respectively.

Syntax-based Features

After integrating classical linguistic attributes, we investigated using more linguistically sophisticated features. First of all, syntactic structure is known to help MWE identification (Fazly et al., 2009; Seretan, 2011; Nagy T. and Vincze, 2014). We therefore inform the system with the provided syntactic dependencies when available: for each token B_n that both appears in the buffer and is a syntactic dependent of S_0 with label l, we capture the existence of the dependency using the features RightDep(S_0, B_n) = True and RightDepLabel(S_0, B_n) = l. We also use the opposite features IsGovernedBy(S_0, B_n) = True and IsGovernedByLabel(S_0, G) = l when S_0's syntactic governor G appears in the buffer. Other syntax-based features aim at modeling the direction and label of a syntactic relation between the two top elements of the stack (the feature syntacticRelation(S_0, S_1) = ±l is used for S_0 governing/governed by S_1). All these syntactic features (cf. H in Table 1) try to capture syntactic regularities between the tokens composing a VMWE.

History-based Features

We found that other traditional transition-based system features were sometimes useful, like the (local) transition history of the system. We thus added features to represent the sequence of previous transitions (of length one, two or three, cf. settings B, C and D in Table 1).

Distance-based Features

Distance between sentence components is also known to help transition-based dependency parsing (Zhang and Nivre, 2011). We thus added the distance between S_0 and B_0 and the distance between S_0 and S_1 (cf. settings K and M in Table 1).

Dictionary-based Features

We also added features based on the VMWE dictionary automatically extracted from the training set. Such features inform the system when one of the focused elements (S_i, B_j) is a component of a VMWE present in the dictionary (cf. L in Table 1).

Stack-length Features

Using the length of the stack as an additional feature (cf. O in Table 1) has also proven beneficial during our feature tuning. Finally, it is worthwhile to note that system settings (cf. Table 1) interact when used to generate the precise set of features. For instance, if lemma extraction is disabled (code G) while bigram extraction is enabled (code F), the produced features for e.g. the S_1S_0 bigram would not include the following features: S_1lS_0l, S_1pS_0l and S_1lS_0p.

Feature Tuning

We first divided the data sets into 3 groups, based on the availability of CoNLL-U files: (a) for BG, HE and LT, only the VMWEs on tokenized sentences are available; (b) CS, ES, FA, MT and RO are accompanied by CoNLL-U files but without syntactic dependency annotations; and (c) the other languages are accompanied by a fully annotated CoNLL-U file.
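As an illustration of how such templates turn a configuration into classifier input, the following sketch builds a feature dictionary from the top stack and buffer units, including the suffix-based fallback used when lemmas and POS tags are unavailable. The field names, template keys, and dictionary encoding are hypothetical reconstructions, not the released system's code.

```python
def extract_features(stack, buffer, use_lemma=True, use_pos=True):
    """Sparse binary features from a configuration (illustrative layout).

    Units are dicts with keys "w" (word form), "l" (lemma), "p" (POS tag).
    use_lemma / use_pos correspond roughly to settings G and J in Table 1.
    """
    feats = {}

    def unit(name, u):  # unigram templates
        if u is None:
            feats[name + ".null"] = 1
            return
        feats[name + "w=" + u["w"]] = 1
        if use_lemma:
            feats[name + "l=" + u["l"]] = 1
        else:
            # fallback "morphological" attributes: last 2 and 3 letters
            feats[name + "r=" + u["w"][-2:]] = 1
            feats[name + "s=" + u["w"][-3:]] = 1
        if use_pos:
            feats[name + "p=" + u["p"]] = 1

    s0 = stack[-1] if stack else None
    s1 = stack[-2] if len(stack) > 1 else None
    b0 = buffer[0] if buffer else None
    b1 = buffer[1] if len(buffer) > 1 else None
    for name, u in [("S0", s0), ("S1", s1), ("B0", b0), ("B1", b1)]:
        unit(name, u)

    def bigram(name, x, y):  # bigram templates (setting F)
        if x and y:
            if use_pos:
                feats[name + "pp=" + x["p"] + "|" + y["p"]] = 1
            if use_lemma:
                feats[name + "ll=" + x["l"] + "|" + y["l"]] = 1

    bigram("S1S0", s1, s0)
    bigram("S0B0", s0, b0)
    bigram("S1B0", s1, b0)
    bigram("S0B1", s0, b1)
    return feats

tok = lambda w, l, p: {"w": w, "l": l, "p": p}
print(extract_features([tok("picked", "pick", "VERB")],
                       [tok("up", "up", "ADP"), tok("the", "the", "DET")]))
```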
In the first tuning period, we tested the various configurations using three pilot languages (BG, CS, FR), representing one group each. In the final days of the experiments, the set of languages tested was enlarged to all of them, and systematic tuning was performed for every language.

Results

Table 2 summarizes the performance of the system over all the languages proposed by the shared task. Each row of the table displays the per-MWE and per-token F-scores for a given language (identified by its ISO 639-1 code) on the test dataset, on top of a 5-fold cross-validation (CV) per-MWE F-score on the training dataset. The system settings are represented as a sequence of the codes described in Table 1. The delta columns display the difference in F-score (×10⁻²) between our system and the best other system of the shared task for the current evaluation/language configuration. We can observe that results are very heterogeneous. For instance, five languages (CS, FA, FR, PL, RO) are above 0.70 per-MWE F-score in the case of cross-validation, while seven languages (DE, HE, HU, IT, LT, MT, SV) are below 0.30. In general, we can see an approximately linear correlation between the number of training VMWEs and the performance. This suggests that the training datasets are not large enough, as the systems' performance does not converge. We note though that some languages like CS and TR reach relatively low scores given the size of their training data, which shows the high complexity of this task for these languages. When comparing with the other shared task systems, we can observe that our system is the only one that handled all 18 languages, showing the robustness of our approach. Moreover, evaluation using the per-MWE F-score (i.e. exact VMWE matching) ranks our system first on all languages but two (HU: 2nd, RO: 3rd), displaying an average difference of 6.73 points with the best other system in the current evaluation/language pair. Concerning per-token scores (which allow partial matchings), results are relatively lower: our system is ranked first for 12 languages (out of 18), with a positive average difference of 1.84 points as compared with the best other system. These very encouraging results for per-MWE evaluations seem to show that our system succeeds better at considering a MWE as a whole. Further error analysis is needed to explain this trait, and in particular to check the impact of the Merge transition, which transforms sequences of elements into one.

Related Work

Previous approaches to VMWE identification include the two-pass method of candidate extraction followed by binary classification (Fazly et al., 2009; Nagy T. and Vincze, 2014). VMWE identification has also been performed using sequence labeling approaches with an IOB scheme. For instance, Diab and Bhutada (2009) apply a sequential SVM to identify verb-noun idiomatic combinations in English. Such approaches were used for MWE identification in general (including verbal expressions), ranging from contiguous expressions (Blunsom and Baldwin, 2006) to gappy ones (Schneider et al., 2014). A joint syntactic analysis and VMWE identification approach using off-the-shelf parsers is another interesting alternative that has been shown to help VMWE identification, such as for light verb constructions (Eryigit et al., 2011; Vincze et al., 2013).

Conclusion and future work

This article presents a simple transition-based system devoted to VMWE identification.
In particular, it offers a simple mechanism to handle discontinuity, since foreign elements are iteratively discarded from the stack, which is a crucial point for VMWEs. It also has the advantage of being robust, accurate and efficient (linear time complexity). As future work, we would like to apply more sophisticated syntax-based features, as well as more advanced machine-learning techniques like neural networks and word embeddings. We also believe that a dynamic oracle could help improve results by better dealing with cases where the system is unsure.
2017-04-19T23:17:27.546Z
2017-04-04T00:00:00.000
{ "year": 2017, "sha1": "95a82555e6a8dfbf7997e5d514f5a4d706182457", "oa_license": "CCBY", "oa_url": "https://www.aclweb.org/anthology/W17-1717.pdf", "oa_status": "HYBRID", "pdf_src": "ACL", "pdf_hash": "524dd11ce1249bae235bf06de89621c59c18286e", "s2fieldsofstudy": [ "Computer Science", "Linguistics" ], "extfieldsofstudy": [ "Computer Science" ] }
119353444
pes2o/s2orc
v3-fos-license
Effect of coupling on scheme of hysteresis jumps in current-voltage characteristics of intrinsic Josephson junctions in high-T_c superconductors

We report numerical calculations of the current-voltage characteristics of intrinsic Josephson junctions in high-T_c superconductors. The charging effect at the superconducting layers is taken into account. A set of equations is used to study the non-linear dynamics of the system. In the framework of the capacitively coupled Josephson junctions model we obtain the total number of branches using fixed initial conditions for the phases and their derivatives. The influence of the coupling constant α on the current-voltage characteristics at fixed parameter β (β² = 1/β_c, where β_c is the McCumber parameter) and the influence of α on the β-dependence of the current-voltage characteristics are investigated. We obtain the α-dependence of the branch slopes and branch endpoints. The obtained results show new features of the coupling effect on the scheme of hysteresis jumps in the current-voltage characteristics of intrinsic Josephson junctions.

I. INTRODUCTION

The phase dynamics in intrinsic Josephson junctions (IJJ) has attracted great interest because of its rich and interesting physics on the one hand and its prospective applications on the other. Different types of coupling between junctions, like inductive coupling in the presence of a magnetic field [1], capacitive [2]-[3], charge-imbalance [4] and phonon [5] couplings, determine the variety of current-voltage characteristics (IVC) observed in high-temperature superconductors (HTSC). In [6] it has been stressed that the capacitive coupling takes various values in HTSC and layered organic superconductors, that is, the capacitive coupling is tunable in these systems. Based on this fact, a study of the dynamics of the CCJJ model, focusing on the dependence of the phase dynamics on the strength of the capacitive coupling constant, has been presented in that paper.

II. MODEL AND NUMERICAL RESULTS

In the CCJJ model the dynamics of the gauge-invariant phase difference φ_l between superconducting layers l and l+1 is described by the equations of the model, where I and I_c are the external dc current and the Josephson critical current, respectively. We proposed a fixed initial conditions method (FIC method), which is based on determining the initial conditions using the values of the branch slopes. By this method we simulate the IVC of IJJ under the restriction that the patterns of distribution of phase-rotating junctions are symmetric [7]. For the case of 11 junctions at α = 1, β = 0.2, γ = 0.5 we obtain the complete branch structure consisting of 45 branches with different slopes. The influence of the coupling parameter α on the IVC of a stack of IJJ is demonstrated in Fig. 1, where the IVC calculated at fixed initial conditions are shown for α = 0.1, 0.5 and 1. The main features of this influence on which we concentrate in this paper are the changes in the slopes and the endpoints of the branches. As seen in Fig. 1, the resistive branches shift towards the higher-voltage side (towards the outermost branch) [8] and their endpoints increase with increasing α. Fig. 2 shows the α-dependence of the slopes for some branches. The slope of the outermost branch (all junctions are in the rotating state, R-state) does not depend on the value of the coupling constant.
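Because the governing equations are not reproduced in this excerpt, the following sketch should be read as an assumption-laden illustration of how such IVC simulations are typically set up. It adopts one commonly used normalized CCJJ-type formulation (phase velocity coupled capacitively to the neighboring voltages, with dissipation β), treats the stack boundaries crudely (the parameter γ in the text presumably encodes the actual boundary conditions), and simply sweeps the bias current upward while time-averaging the total voltage.

```python
import numpy as np

# Assumed (not taken from the paper) normalized dynamics for N junctions:
#   dphi_l/dt = V_l - alpha * (V_{l+1} + V_{l-1} - 2 V_l)
#   dV_l/dt   = I - sin(phi_l) - beta * dphi_l/dt
N, alpha, beta = 11, 1.0, 0.2

def derivs(phi, V, I):
    lap = np.zeros(N)
    lap[1:-1] = V[2:] + V[:-2] - 2 * V[1:-1]
    lap[0] = V[1] - V[0]          # crude boundary treatment (model-dependent)
    lap[-1] = V[-2] - V[-1]
    dphi = V - alpha * lap
    dV = I - np.sin(phi) - beta * dphi
    return dphi, dV

def step(phi, V, I, dt=0.05):
    # one 4th-order Runge-Kutta step
    k1p, k1v = derivs(phi, V, I)
    k2p, k2v = derivs(phi + 0.5 * dt * k1p, V + 0.5 * dt * k1v, I)
    k3p, k3v = derivs(phi + 0.5 * dt * k2p, V + 0.5 * dt * k2v, I)
    k4p, k4v = derivs(phi + dt * k3p, V + dt * k3v, I)
    return (phi + dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6,
            V + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6)

phi, V = np.zeros(N), np.zeros(N)    # fixed initial conditions
for I in np.arange(0.0, 2.0, 0.01):  # upward bias sweep only (no hysteresis loop)
    for _ in range(2000):            # discard the transient
        phi, V = step(phi, V, I)
    vs = []
    for _ in range(2000):            # time-average the total voltage
        phi, V = step(phi, V, I)
        vs.append(V.sum())
    print(I, np.mean(vs))            # one point of the IV characteristic
```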
As we expected, the slopes of the branches approach the slope of the outermost branch, but this approach slows down with increasing α. Using the equations of the CCJJ model [7] we obtain an analytical expression for the α-dependence of the slope, taking into account the distribution of R- and O-junctions in the stack; such expressions can be written, for example, for the branches O(1,11) and O(5,6,7). The slope for the branch O(5,6,7) (junctions 5, 6 and 7 are in the O-state) tends to the slope of the outermost branch as α → ∞. As we mentioned before, the order of the branches in the IVC changes with increasing α. For example, the positions of branches 31 and 21 are interchanged. The coupling between junctions breaks the equidistance of the branch structure, and this happens already at sufficiently small values of α. In general, each junction in the O-state in the stack has its own α-dependence of the phase difference, and the junction with the strongest dependence determines the α-dependence of the branch endpoint. From the analysis of equation (1) and the resistively shunted junction equation we find that, for example, for the state O(5,6,7) the phase difference in junction 6 determines the endpoint. Mostly the α-dependences of the endpoints are monotonic, but in some cases the strongest α-dependence of sin φ_l is transferred from one junction to another with increasing α. This leads to a broken dependence. For example, for branch 28 (state O(1,5,6,7,11)) the α-dependence is determined by junction 6 at small α, but by junctions 1 and 11 at large α. The analytical dependence in this case also has an explicit form.
2019-04-14T02:08:44.637Z
2005-07-04T00:00:00.000
{ "year": 2005, "sha1": "4ca32acc9924191ab1df1b6eb48d6fc879677f7c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0507076", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "619fcce206d173bfb40c3faf38085cc76a5b1535", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
253523356
pes2o/s2orc
v3-fos-license
ET-AL: Entropy-Targeted Active Learning for Bias Mitigation in Materials Data

Growing materials data and data-driven informatics drastically promote the discovery and design of materials. While there are significant advancements in data-driven models, the quality of data resources is less studied despite its huge impact on model performance. In this work, we focus on data bias arising from uneven coverage of materials families in existing knowledge. Observing different diversities among crystal systems in common materials databases, we propose an information entropy-based metric for measuring this bias. To mitigate the bias, we develop an entropy-targeted active learning (ET-AL) framework, which guides the acquisition of new data to improve the diversity of underrepresented crystal systems. We demonstrate the capability of ET-AL for bias mitigation and the resulting improvement in downstream machine learning models. This approach is broadly applicable to data-driven materials discovery, including autonomous data acquisition and dataset trimming to reduce bias, as well as data-driven informatics in other scientific domains.

INTRODUCTION

Data-driven autonomous materials design has recently emerged as a new paradigm for materials discovery [1][2][3]. With large materials data and powerful informatics tools, this paradigm significantly accelerates the understanding of physical and chemical mechanisms in materials science [4][5][6], accurate prediction of materials structures and properties [7][8][9][10], as well as the design of materials with desired properties [11][12][13][14]. While the informatics tools, such as machine learning (ML) and design optimization models, hold a conspicuous position in these works, the data resources are just as important 15,16. The performance that the models can attain depends highly on the quality of the data they are built upon. Data veracity entails a description of where and how data were collected, but it is less frequently articulated why (or why not) certain data are used. Following the Materials Genome Initiative 17, multiple materials data resources have emerged. The Materials Project 18, the Open Quantum Materials Database (OQMD) 19,20, the Automatic Flow for Materials Discovery (AFLOW) 21, and the Joint Automated Repository for Various Integrated Simulations (JARVIS) 22 are prominent examples. These platforms use high-throughput first-principles calculations to evaluate various properties for a wide range of materials (stoichiometric and defect-free) and make the data publicly available. Besides these centralized data resources, a growing portion of materials data is generated in various research projects, available from published papers (including their associated repositories) and platforms such as the Materials Data Facility 23. These distributed data are increasingly utilized owing to data/text mining tools. However, it is common that materials data do not have uniform coverage, for multiple reasons: (1) The candidate materials for database construction are selected among known structures or based on known structural prototypes, and lower-symmetry structures are less explored than higher-symmetry ones.
(2) Most literature only reports compounds perceived to exhibit "good" properties based on the aspect of interest 24, while the "unsatisfactory" results can also be valuable 25. (3) Property simulation is easier for compounds that are structurally simple, and property measurement is simpler for compounds that are readily synthesizable and stable at ambient pressures and temperatures. These, among other factors, lead to bias in the materials data platforms. Data bias, a ubiquitous issue in data science, has been more widely recognized in the social science domain 26,27 but is often overlooked in the physical sciences, including materials science. Just as it causes social inequity in social policy built upon such data, bias in materials data is harmful to data-driven materials modeling and design. Belviso et al. 28 demonstrated how bias in the chemistry space prevents an ML model from accurately predicting electronic bandgaps. Bias in the structure space is less explicit but also detrimental. An example is bias in stability data among crystal structures, which we refer to as "structure-stability bias". Such bias hinders the modeling of phase stabilities, thus affecting the accurate prediction of microstructure. As Molkeri et al. 29 found, microstructure information is important for the modeling of various materials properties; therefore, the impact of structure-stability bias is not limited to stability itself but extends to other properties. Although some attempts have been pursued to characterize bias in trained models post facto 30 or to reduce the impact of data bias on model training 31,32, few have addressed bias intrinsic to the data on which the models are trained and mitigated the bias from the onset. The presence of bias in materials data may be inevitable, since the distributions of properties are unknown and can be uneven in nature. Nonetheless, detecting the bias of datasets could alert users to its potential impact. As bias originates from uneven coverage of different materials families, it can be captured by examining the diversities of families in the data, which reflect the completeness of coverage. Moreover, by adding well-selected new data points, bias in a dataset can be reduced. Towards this end, the active learning (AL) method provides a way to sequentially select optimal data points guided by sampling strategies considering uncertainty, diversity, or performance [33][34][35]. AL-based methods have been applied to accelerate materials discovery targeting high performance [36][37][38][39] and chemical uniqueness 40, as well as to assess the selection of the design space 41. With a specially designed sampling strategy, AL can serve as a method for bias reduction. In this work, we propose entropy-targeted active learning (ET-AL) as a systematic approach to detecting and reducing materials data bias. We focus on the structure-stability bias in DFT-generated databases as a use case for demonstrating the approach. With information entropy as a diversity metric, we quantify the bias of stability by its diversity among structures. We then develop an active learning method with a sampling strategy aimed at increasing the diversity of stability of underrepresented structures, thus reducing the bias. We demonstrate the capability of ET-AL through experiments performed on existing datasets.
We show that ET-AL provides a general method for mitigating bias in materials datasets and is also applicable to guiding the construction of materials databases, thus granting materials researchers access to low-bias data for machine learning.

Data Bias Characterization

For demonstration purposes, we retrieve two materials datasets: (1) the structure and formation energy per atom of all binary intermetallic compounds among the elements Al, Ti, Cr, Fe, Co, Ni, Cu, and W from OQMD (denoted OQMD-8, size 2,953); and (2) all entries with elastic moduli available from the JARVIS classical force-field inspired descriptors (CFID) dataset 42, cleaned as described in the Methods (denoted J-CFID, size 10,898). We show the distribution of the formation energy per atom ΔE_f of materials in the two datasets with respect to crystal system in Figure 1a-b. We consider compounds in the cubic, hexagonal, trigonal, tetragonal, and orthorhombic systems to be higher in symmetry than those of the monoclinic or triclinic systems, because they possess one or more rotation axes, and their unit cells have three or fewer free interaxial angles and lattice parameters. Among the seven crystal systems, the lower-symmetry monoclinic and triclinic systems display higher distribution density in the more stable (lower ΔE_f) region. This observation contradicts the empirical rules that materials with higher symmetry (which are usually more close-packed and have higher coordination numbers) generally have higher stability 43,44. Such contradiction is due to the imbalanced coverage of different crystal systems in the materials datasets, and we refer to this problem as "structure-stability bias". Without assuming any prior knowledge, such as the correlation between symmetry and stability, are we still able to capture the bias? To that end, we first define the diversity of a dataset by recognizing that, for values of a continuous variable x, the diversity can be quantified by the information entropy 45

h(x) = −∫ p(x) log p(x) dx,

where p(x) is the underlying probability density function of x. Note the difference between diversity and uncertainty: whereas uncertainty describes the state of random variables with incomplete or unknown information, diversity is an attribute of an already known dataset. In general, we can group the data into clusters by any appropriate criterion and estimate h(x) for every cluster from the values in the dataset, thus quantifying the diversity of x in every cluster. Based on the observations from Figure 1a-b, we next measure bias using a fairness criterion 46, i.e., the difference in h(x) between different clusters indicates the existence and level of bias. For our application, we use crystal systems as natural clusters, and quantify the structure-stability bias via the fairness of h(ΔE_f). Figure 1c-d shows that h(ΔE_f) captures the observed difference in diversities, thus reflecting the structure-stability bias. The comparison also shows that J-CFID is overall more diverse than OQMD-8, which is because it covers a much larger chemical space. But to measure the biases of the datasets, the focus is the difference in h(ΔE_f) between crystal systems within each dataset.

Active Learning for Bias Mitigation

With fairness in diversity as a measure, the data bias can be reduced systematically by adding data to the least diverse crystal system in a manner that increases its diversity in ΔE_f. We develop the entropy-targeted active learning (ET-AL) algorithm (Figure 2) to attain this automatically.
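The Methods below name scipy.stats' differential_entropy as the estimator used for h; the following minimal sketch applies it per crystal system and flags the least diverse one. The grouping, toy data, and sample-size threshold are illustrative choices, not the paper's code.

```python
import numpy as np
from scipy.stats import differential_entropy

def entropy_by_system(records):
    """records: iterable of (crystal_system, formation_energy) pairs."""
    groups = {}
    for system, e_f in records:
        groups.setdefault(system, []).append(e_f)
    # the estimator needs more than a handful of samples per group
    return {s: differential_entropy(np.asarray(v))
            for s, v in groups.items() if len(v) > 10}

# toy data: a broad cubic distribution vs. a narrow (low-diversity) triclinic one
rng = np.random.default_rng(1)
data = ([("cubic", e) for e in rng.normal(-0.4, 0.6, 500)]
        + [("triclinic", e) for e in rng.normal(-0.6, 0.2, 80)])

h = entropy_by_system(data)
target = min(h, key=h.get)   # least diverse crystal system -> ET-AL target
print(h, target)
```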
In the active learning context, we refer to the materials with properties known and unknown as "labeled" and "unlabeled", respectively. The ET-AL algorithm iteratively picks a target crystal system (usually the least diverse one), selects an optimal unlabeled material that may improve h(ΔE_f) of that system, and adds it to the labeled data. The iteration terminates when a pre-specified criterion is satisfied, or when all materials are labeled. Details of the algorithm and its implementation are provided in the Methods and Algorithm S1.

Figure 2. Schematic of the ET-AL algorithm for data bias mitigation. a, Overall procedure of ET-AL: a target crystal system is selected, then an unlabeled material is selected and labeled. The steps repeat until the stopping criteria are satisfied. b, The procedure of sample acquisition: a Gaussian process (GP) model is trained with the labeled data and makes predictions for the unlabeled data. The predictive mean and variance of h resulting from adding each material are inferred therefrom. Based on these, the optimal material is selected according to the sampling strategy and added to the labeled data.

Experimentation and Demonstration

As a demonstration of the ET-AL method, we conduct experiments on the J-CFID dataset. The overall procedure is illustrated in Figure 3a: we split the dataset into a test set, a labeled set with artificial bias, and an unlabeled set. We use ET-AL to augment the labeled set into a low-bias training set (marked ETAL) and create another training set (marked RAND) of the same size by randomly sampling from the unlabeled set. In addition to demonstrating that ET-AL effectively reduces the structure-stability bias, we show the impact such bias has by comparing supervised ML models for the bulk modulus and shear modulus derived from the two training sets.

Figure 3. a, From the remaining data, some entries are taken away to create an artificial bias and put into the unlabeled set together with randomly selected entries. The labeled entries remaining form a labeled set with significant bias. Two training sets are constructed by adding the same number of samples from the unlabeled set to the labeled set, guided by ET-AL and randomly, respectively. b, Change of information entropy in every crystal system during ET-AL iterations. The initial information entropies are shown before iteration 0.

In the experiment, we set the labeled set size to 1,000, the unlabeled set size to 5,000, and the test set size to 4,898 (see Methods). The artificial bias is introduced by removing all tetragonal and trigonal materials with ΔE_f > 0 and all orthorhombic materials with ΔE_f < 0. With materials represented by graph embeddings (presented in the Methods), ET-AL is applied to the dataset and runs for 985 iterations before termination. As Figure 3b shows, the introduced bias is captured by the diversity metric (the three manipulated crystal systems have relatively low initial h) and is mitigated by ET-AL. Moreover, through ET-AL, the dataset reaches a state where the diversities of the crystal systems are closer to each other than in the initial state, which is favored by the fairness criterion (also shown in Figure S1). A similar demonstration performed on the OQMD-8 dataset is presented in Figure S2 and Figure S3. Next, we investigate the effects of ET-AL on the dataset distribution. We employ t-distributed stochastic neighbor embedding (t-SNE) 47 for dimension reduction of the graph embedding representations of the J-CFID data into a 2-dimensional space. The low-dimensional embeddings acquired by t-SNE reflect the distribution of the data in the structure space.
In Figure 4a, we use these embeddings to show the coverage of the labeled dataset (see also Figure S4), along with the ET-AL-selected and randomly selected data (Figures 4b-c). ET-AL guides sampling in the underrepresented regions (lighter shades in Figure 4a), as opposed to the nearly uniform coverage produced by random sampling. To assess the impact of bias on property prediction, we train multiple supervised learning models on the two training sets, both of size 1,954, for predicting the bulk and shear moduli from a set of physical descriptors (detailed in the Methods). Each model is trained 30 times with different random states (controlling the initialization, feature permutation, etc., but not affecting the training data), and the coefficient of determination (R²) on the test set is recorded. The models include random forest (RF), gradient boosting (GB), neural network (NN), and support vector regression (SVR), among which RF and GB attain relatively better performance on the task. A potential reason for this performance difference is that the descriptors form heterogeneous tabular data, for which tree ensemble models have an advantage 48. We summarize the performance of these ML models in Figure 5, from which we find that models derived from the ETAL dataset with reduced bias display systematically superior accuracy over those from the RAND dataset. In Figure 6, we mark the "most improved samples", i.e., test samples for which the ML models' prediction accuracies using the ETAL training set show the greatest advantage compared to using the RAND training set. As observed, most of these samples are in the underrepresented regions of the labeled set (low-density regions in Figure 4a). ET-AL's focus on these regions during sample selection (triangles in Figure 6 overlap with sampling points in Figure 4b) leads to the better accuracy of ML models trained on the ETAL dataset. These observations agree with the findings of Li et al. 49: ML models trained on a biased dataset lack generalizability to underrepresented test samples. ET-AL provides a solution to this problem by reducing the structure-stability bias, which improves the coverage of the dataset in the structure space and thus facilitates downstream tasks such as ML modeling of the mechanical properties.

Potential Applications

The data bias metric and ET-AL method proposed in this work have a wide range of applications in materials discovery and beyond. First, researchers may examine and potentially reduce the bias in their datasets before developing data-driven models on them or publishing the data. Second, ET-AL allows steering autonomous data acquisition in an unbiased way. This includes high-throughput computation, as well as experiments such as self-driving laboratories 50.
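A sketch of this downstream comparison follows, assuming descriptor matrices and moduli targets have been prepared elsewhere (the synthetic arrays below are placeholders): the same model class is fitted on each training set over several random states and scored by R² on the shared test set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.metrics import r2_score

# Placeholder arrays standing in for the real physical-descriptor data;
# sizes mirror the text (training sets of 1,954; test set of 4,898).
rng = np.random.default_rng(0)
X_etal, y_etal = rng.normal(size=(1954, 20)), rng.normal(size=1954)
X_rand, y_rand = rng.normal(size=(1954, 20)), rng.normal(size=1954)
X_test, y_test = rng.normal(size=(4898, 20)), rng.normal(size=4898)

def scores(model_cls, X_train, y_train, seeds=range(30)):
    # 30 random states, as described in the text; training data fixed
    out = []
    for seed in seeds:
        m = model_cls(random_state=seed)
        m.fit(X_train, y_train)
        out.append(r2_score(y_test, m.predict(X_test)))
    return out

for name, cls in [("RF", RandomForestRegressor), ("GB", GradientBoostingRegressor)]:
    for split, (X, y) in {"ETAL": (X_etal, y_etal), "RAND": (X_rand, y_rand)}.items():
        s = scores(cls, X, y)
        print(name, split, "mean R2 =", sum(s) / len(s))
```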
Though we presented the structure-stability bias as an example, the method applies to other forms of bias as well. An application of particular significance is dealing with bias in materials data resources. Since new materials are continually added to the databases, ET-AL can fit into the pipeline to select the materials to add. In practice, however, some databases are so large that an observable effect of bias mitigation requires adding many new data points, and there are other considerations besides bias in database construction. In remediation, the information entropy-based bias metric can also guide trimming rather than expanding a database, i.e., selecting a less biased subset. The level of bias can be tuned according to the needs of the use case. Though originally proposed for materials data, ET-AL is generally applicable to other fields where large data are generated and curated for future reuse 51. Protein databases 52 for biomedical studies and geometry datasets 53 for design and manufacturing studies are a few examples. Bias in these data may lead to inaccuracies in parameter calibration, predictive modeling, or design optimization, and ET-AL enables detection and mitigation of the bias.

CONCLUSION

We highlighted the previously overlooked bias in materials data resources, which has an impact on a broad range of data-driven materials modeling and design studies. We proposed a generic metric for data bias based on diversity measured by information entropy, which successfully captures the structure-stability bias in datasets retrieved from the widely used materials data platforms OQMD and JARVIS. We then formulated and implemented an entropy-targeted active learning (ET-AL) framework to automatically reduce bias in datasets by acquiring new samples. Through ablation studies, we demonstrated that ET-AL can effectively reduce the structure-stability bias, thus improving data coverage in the structure space and increasing the accuracy of data-driven modeling of materials properties. We also note that, as a generic framework, ET-AL's capability is not limited to materials databases. As the data-driven research paradigm has been adopted by various domains, and data bias is ubiquitous in almost every data system, we anticipate that the ET-AL method is applicable to a variety of scientific and engineering domains, to facilitate the curation of high-quality data and data-driven studies.

Dataset Preparation

Data collection and cleaning. The OQMD-8 dataset is retrieved from OQMD using its API implemented in the "qmpy-rester" package. The J-CFID dataset is downloaded from figshare.com 42. Out of the >50,000 entries, the ones reporting positive bulk and shear modulus values are kept. Entries containing the elements H and Tc, halogens (VIIA), noble gases (VIIIA), the lanthanum family, and those with atomic numbers ≥84 are excluded. Figure S5 and Figure S6 show some statistics of the J-CFID dataset. Besides graph embeddings, many other representations that can be derived from materials' crystal structures without knowing their properties are also compatible with ET-AL; examples include fragment descriptors 55 and tensor representations 56.

Information Entropy

The information entropy of the continuous-valued ΔE_f is estimated from a discrete set of ΔE_f values using the "differential_entropy" function from the scipy.stats package 57. The software automatically selects a numerical method for entropy estimation 58 based on the data size; details of the numerical estimation methods are described in the Supplementary Materials.

Active Learning

Target system selection. In every iteration, all crystal systems with unlabeled sample(s) available are candidates. Systems that are sampled but not improved five consecutive times are excluded. Of the remaining candidates, the crystal system with the lowest h(ΔE_f) is selected as the target.

Gaussian process regression. A GP model assumes the responses are jointly Gaussian, y ~ N(0, K + σ²I), where the covariance matrix K is inferred from the similarity between predictors using a kernel function, and the σ² term accounts for noise. Once trained, given an unseen predictor x*, the model outputs not a single value but a predicted Gaussian distribution of the response y*. Hence, GP is an uncertainty-aware machine learning model.
We train the GP models using the "ExactGPModel" module of the GPyTorch package 60, with the graph embedding representation as predictors and ΔE_f as the response.

Monte Carlo inference. For each unlabeled material, the GP model provides a predicted distribution of ΔE_f, from which we use the Monte Carlo method 61 to infer the resulting change in h(ΔE_f) from adding the material, as illustrated in Figure 7. We thereby obtain the predictive mean and variance of h for every unlabeled material, which are later used in the evaluation of the sampling criterion.

Sampling criterion. For every unlabeled material x, the expected improvement

EI(x) = Δ(x) Φ(Δ(x)/s(x)) + s(x) φ(Δ(x)/s(x))

is evaluated, where Δ(x) = h̄(x) − h_cur is the difference between the predicted mean h̄(x) and the current h; s(x) is the predicted standard deviation of h; and φ(⋅) and Φ(⋅) are the probability density function (pdf) and cumulative distribution function (cdf) of the standard Gaussian distribution, respectively. The unlabeled material with the largest EI is selected, i.e., x* = arg max_x EI(x). More generally, the selection operates in batches, i.e., one or multiple unlabeled material(s) with large EI are selected in every iteration. A batch size of 1 is used in the implementation of this work, as the time for running an ET-AL iteration is negligible compared to the acquisition of a new data point (e.g., a first-principles calculation or an experimental measurement). On the other hand, data acquisition can be parallelized; in that case, ET-AL can easily be configured to use a larger batch size, thus further improving computational efficiency.

Stopping criteria. In our experiments on the J-CFID dataset, the active learning process is terminated when the target crystal system has the highest h(ΔE_f) of all systems. In that case, improving its h would worsen the fairness of diversity. However, when evaluation (e.g., first-principles calculation) of new materials is feasible, the existence of materials improving h of the least diverse system is almost guaranteed. In such application scenarios, stopping criteria may be specified according to resources and budget, or may not be needed.

Table S1. Tuned hyperparameters of the random forest and gradient boosting models for predicting the bulk and shear moduli. "^" denotes that the effect of a hyperparameter was found to be insignificant; hence, the default setting is used.

Metric selection

One definition of data bias is "unjustifiable concentration on a particular part" 1. In the context of this work, the "parts" can be regions in a space broadly defined by composition, (micro)structure, property, processing, or energy-based descriptions. As an example, the structure-stability bias arises from concentration (uneven coverage) on materials with certain structures (crystal systems) and stability (ΔE_f). Such uneven coverage can be captured by the different diversities of stability among different crystal systems. The information entropy h(x) = −∫ p(x) log p(x) dx has been widely adopted as a metric for diversity 2. Its numerical nature and simplicity of calculation are desirable for a target in active learning. Also widely adopted as a numerical diversity measure are determinantal point processes (DPP) 3. However, DPP is evaluated pairwise, thus lacking scalability to large materials databases. Another seemingly applicable metric is the conditional information entropy.

Information entropy estimation

For a continuous random variable x, evaluation of its information entropy requires the probability density function (pdf) p(x). However, given a discrete set of values {x_i}, i = 1, ..., n, the underlying pdf is not obtainable.
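The selection step can be sketched as follows, taking the Monte Carlo estimates of the entropy mean and standard deviation as inputs; the array names and toy values are illustrative, not the paper's code.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(h_mean, h_std, h_cur):
    """EI over candidate materials for maximizing the entropy h.

    h_mean, h_std: per-candidate Monte Carlo estimates of the entropy that
                   would result from labeling each candidate
    h_cur:         current entropy of the target crystal system
    """
    h_mean, h_std = np.asarray(h_mean, float), np.asarray(h_std, float)
    delta = h_mean - h_cur
    # guard against zero predictive standard deviation
    z = np.divide(delta, h_std, out=np.zeros_like(delta), where=h_std > 0)
    return delta * norm.cdf(z) + h_std * norm.pdf(z)

# toy values for three unlabeled candidates
h_mean, h_std, h_cur = [0.8, 1.2, 1.1], [0.05, 0.3, 0.1], 1.0
ei = expected_improvement(h_mean, h_std, h_cur)
best = int(np.argmax(ei))   # index of the material to label next
print(ei, best)
```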
We use the numerical estimators implemented in scipy.stats, with the "auto" setting. Specifically, for 11 < n ≤ 1000, the result is given by the estimator presented by Ebrahimi et al. 4, while for n > 1000 the Vasicek estimator 5 is used; in both cases the window size m is defined by ⌊√n + 0.5⌋.

Graph embedding

The crystal graph convolutional neural network (CGCNN) 6 predicts materials properties from the graph representation of the crystal structures. The input includes node feature vector(s) that encode atomic properties, and edge feature vector(s) that encode connections between atoms; both can be obtained from crystallographic information framework (CIF) files without knowing other properties. We retrieve a CGCNN model pretrained to predict the formation energy per atom (ΔE_f) on the Materials Project dataset 7. We feed the CIF files of the materials in J-CFID to the pretrained model and obtain the activations of the second-to-last layer of neurons. These 32-dimensional vectors (graph embeddings) are used as representations of the J-CFID materials structures. Note that graph embeddings do not have direct physical meanings, but they are generally obtainable for any given crystal structure.

Physical descriptors

In supervised machine learning (ML) of mechanical properties, we use physically meaningful descriptors of materials as input. The descriptors include ones defined in the Magpie ML framework 8, the Ewald energy per atom, and the volume per site. The Magpie descriptor set includes the minimum, maximum, range, mean, average deviation, and mode of features such as the Mendeleev number, atomic weight, and covalent radius of the elements/atoms in a compound. A complete list can be found in the data files open-sourced along with the code. The Ewald energy per atom is obtained using the "analysis.ewald" module of the pymatgen package 9; the other descriptors are obtained using the "featurizers" module of the Matminer package 10.

J-CFID data splitting

In preparing the data for the experiments, we first split the J-CFID dataset of size 10,898 into a test set of 4,898 data points and an "experiment" set of 6,000 data points. Due to the large size and the randomness in dataset splitting, the test set has a similar level of bias to the original J-CFID dataset. From the experiment set, we take (1) all tetragonal and trigonal materials with ΔE_f > 0 and (2) all orthorhombic materials with ΔE_f < 0, then randomly draw data points from the remaining data, to form an unlabeled set of size 5,000. The other 1,000 data points make up the labeled set.

Algorithm Analysis
2022-11-16T06:42:18.190Z
2022-11-15T00:00:00.000
{ "year": 2022, "sha1": "cba5b6f1bed4f630b648ec537ebe5d8a23918d8e", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ad9349d5832a9f47330604c0ffcae9d1ef4e6c6a", "s2fieldsofstudy": [ "Materials Science", "Computer Science" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
260793097
pes2o/s2orc
v3-fos-license
Fetal cephalhematoma - an unusual antenatal presentation of a common neonatal scalp swelling posing a diagnostic challenge

Cephalhematoma is an accumulation of blood in the subperiosteal space. While cephalhematoma is a well-documented postnatal occurrence, antenatal presentation is quite rare. This case report focuses on a rare presentation of fetal scalp swelling in a routine 32-week antenatal scan of a 38-year-old female. The swelling resolved spontaneously after birth. Awareness of this atypical manifestation is crucial for the radiologist to consider it in the differentials and for the obstetrician in providing appropriate prenatal care and avoiding unnecessary drastic interventions. The aim is to elucidate the diagnostic challenges and clinical management of this unique presentation.

Introduction

A cephalhematoma is an accumulation of subperiosteal blood, predominantly in the occipital or parietal region of the scalp. It is commonly due to the rupture of blood vessels crossing the periosteum caused by compression of the fetal head during labor. Rupture of vessels during labor due to pressure on the skull or the use of forceps or a vacuum extractor leads to a collection of blood. The etiology of the antenatal presentation could be chronic pressure of the fetal head against the pelvic bones.

[Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have influenced the work reported in this paper. * Corresponding author.]

Case presentation

Ultrasound showed a single live fetus of 32 weeks 6 days in cephalic presentation. The fetal head appeared to be dolichocephalic, with a focal increased extracalvarial hypoechogenic collection in the right parieto-occipital region, measuring 8-9 mm in maximum dimension (Fig. 1). No obvious intracranial finding was noted. An emergency cesarean section was performed due to premature labor. Neonatal physical examination showed a right parietal scalp swelling which was hard and non-fluctuant. A neonatal head scan on postnatal day 1 showed a hypoechoic swelling in the right parieto-occipital region measuring 9-10 mm in maximum thickness (Fig. 2A) which was limited by sutures, with no intracranial communication and no vascularity within. No intracranial bleed was noted on the postnatal ultrasound scans. A subsequent ultrasound scan on day 4 showed significant resolution of the swelling to 4 mm in thickness (Fig. 2B), and a skull radiograph (Fig. 3) together with a neurologist's examination on day 10 showed near-complete resolution of the scalp swelling. Blood tests were unremarkable. A neonatal brain MRI performed after a month was normal. In view of the antenatal scalp swelling, which had reduced by day 4 on postnatal cranial ultrasound with subsequent resolution in 10 days, and the absence of intracranial abnormality, a diagnosis of antenatal cephalhematoma was concluded. It was probably due to chronic antenatal compression of the fetal head against the pelvic bones.

Discussion

Cephalhematoma is caused by shearing forces on the skull and scalp resulting in separation of the periosteum from the underlying calvarium and subsequent rupture of blood vessels, leading to a gradual collection [1]. Neonatal cephalohematomas are seen in 1%-2% of spontaneous vaginal deliveries and 3%-4% of forceps- or vacuum-assisted deliveries [2], but antenatal presentation is very rare. To the best of our knowledge, this is only the second case reported in the literature.
Common causes of cephalohematoma include a prolonged second stage of labor, macrosomia, cephalopelvic disproportion, abnormal fetal presentation, instrument-assisted delivery with forceps or vacuum extractor, and multiple gestations [1]. In our case, we hypothesize that the cephalhematoma resulted from chronic trauma to the fetal parieto-occiput due to its position against the bony pelvis and ischial tuberosity [3]. The possibility of a previous cesarean scar as an additional contributory factor cannot be confirmed. Cephalohematomas can be unilateral or bilateral and are usually bounded by suture lines except in the presence of craniosynostosis [2]. Most resolve spontaneously within weeks but can take a maximum of 3-4 months [2]. Differential diagnosis of an occipital mass in fetal sonography: • Small meningocele
Fig. 1 - (A, B, C) Grayscale and Doppler ultrasound of a 32-week-old fetus with focal parieto-occipital soft tissue swelling in the scalp with no color flow.
Conclusion In conclusion, once a scalp mass has been found on antenatal sonography, cephalhematoma should be considered in the differential diagnosis; it can be confirmed by ultrasound and MRI. This can help to decide the timing and method of delivery and thus reduce the risk of traumatic hemorrhage during vaginal delivery. The review highlights the limited number of documented cases reporting antenatal cephalhematomas. The study presents the clinical characteristics, diagnostic dilemmas, and potential differential diagnoses associated with this condition. Imaging techniques, such as ultrasound and magnetic resonance imaging, are explored as valuable tools in confirming the presence of cephalhematoma during pregnancy. Further research and reporting of cases are essential to deepen our understanding of this rare but self-limiting condition. Awareness of this condition, which resolves spontaneously within a few days after birth, is of utmost importance to avoid drastic interventions and to optimize management.
Patient consent Informed consent was obtained from the patient, who agreed that her medical records, including radiology images, could be used for research and publication in medical journals.
2023-08-11T15:07:29.096Z
2023-08-09T00:00:00.000
{ "year": 2023, "sha1": "dfe4cf750e4e9442bf0d4590ccbd3c56d250e5be", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.radcr.2023.07.055", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "2c71eb2242cf870eecd904afadd3f443f7990147", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
92019973
pes2o/s2orc
v3-fos-license
Evaluation of the Water-Storage Capacity of Bryophytes along an Altitudinal Gradient from Temperate Forests to the Alpine Zone Forests play crucial roles in regulating the amount and timing of streamflow through their water storage function. Bryophytes contribute to this water storage owing to their high water-holding capacity; however, they might be severely damaged by climate warming. This study examined the water storage capacity (WSC) of bryophytes in forests in the mountainous areas of Japan. Sampling plots (100 m²) were established along two mountain trails at 200-m altitude intervals. Bryophytes were sampled in these plots using 100-cm² quadrats, and their WSC was evaluated according to the maximum amount of water retained in them (WSC-quadrat). The total amount of water in bryophytes within each plot (WSC-plot) was then calculated. The WSC-quadrat was affected by the forms of bryophyte communities (life forms) and their interactions, which further influenced soil moisture. The WSC-quadrat did not show any significant trend with altitude, whereas the highest WSC-plot values were obtained in subalpine forests. These changes in WSC-plot were explained by large differences in bryophyte cover with altitude. As the WSC controlled by the life forms might be vulnerable to climate warming, it can provide an early indicator of how bryophyte WSC and associated biological activities are influenced.
Introduction Forests play crucial roles in regulating the amount and timing of streamflow by mitigating the effects of precipitation via their water storage function [1,2]. The influence of afforestation on streamflow has been shown by many studies; for example, afforestation of grasslands and shrublands resulted in a loss of one-third to three-quarters of streamflow on average [3]. These roles contribute to the water cycle in forest ecosystems, which have diverse ecological functions, such as water supply, flood and erosion control, conservation of biodiversity, and climate stabilization [4][5][6]. The storage of water in forests is achieved by the interaction of plant groups with vertical stratification [7]. Forest canopies intercept rainfall and reduce the impact of raindrops on the ground [8]. Under these canopies, epiphytes on trees store rainfall and fog temporarily [9], while the roots of trees and grasses improve soil infiltration, reducing runoff and soil erosion [10,11]. In areas with less developed forest canopies, grass and biological soil crusts (assemblages of bryophytes, lichens, algae, cyanobacteria, and fungi) strongly affect the process of water infiltration into the soil and partly control the amount of runoff [12,13].
In recent years, there has been serious concern about changes to water storage in forests because of the influence of climate warming on vegetation [14][15][16]. The rapid shift in species distributions driven by the global rise in temperature is evidenced by species both expanding into newly favorable locations and declining in unfavorable areas. The estimated shift of species to higher elevations is at a median rate of 11.0 m per decade, while the shift to higher latitudes is estimated at 16.9 km per decade [17]. Under current climate change scenarios, one-tenth to one-half of global land might be highly or very highly vulnerable [18]. Temperate mixed forests, boreal conifer forests, tundra, and alpine biomes are considered the most vulnerable biomes to these changes [18]. The changes of forest ecosystems in response to climate warming might alter water yield, impacting water supply for human consumption [16]. The response of vegetation to climate warming differs among plant groups. Among plant groups, serious damage to bryophytes may arise due to their sensitivity to these changes because of their poikilohydric properties [19]. The water content of bryophytes is highly dependent on their external environment, decreasing rapidly when temperature rises and humidity drops [19,20]. The decrease in water content leads to a shorter period of metabolic activity and tissue damage caused by drought stress [20]. As a result, bryophytes, especially those currently growing in environments with low drought stress, are sensitive to climatic warming that causes both thermal and drought stress [20,21]. Although bryophytes are vulnerable to climate warming, they play important roles in increasing the water storage capacity of forest ecosystems [9,[22][23][24][25][26][27]. Because of their poikilohydric properties, bryophytes can retain a relatively high amount of water within the community, ranging between approximately 200% and 3000% of their dry mass [25,28,29]. Their water storage capacity is severely reduced once they are water saturated [9]; however, they contribute towards buffering the influence of rainfall on forest ecosystems, especially at the beginning of rainfall events [9]. This buffering function might become more important as the frequency of short-term heavy rainfall is expected to increase due to climatic change [30]. The water storage function of bryophytes is well documented for epiphytes occupying montane cloud forests [9,[22][23][24][25][26][27]. For example, in tropical cloud forests, epiphytic bryophytes are estimated to store ca. 3-3.5 mm of rainfall [23,25], whereas epiphytes (bryophytes and lichens) store 1.2-1.4 mm in an old Douglas-fir forest [22]. The contribution of bryophytes to total rainfall interception is estimated to be 6% in tropical montane forests [24].
Unlike epiphytic bryophytes, the water storage function of forest floor bryophytes (bryophytes on the ground or on logs) is poorly known. Nevertheless, these bryophytes exhibit higher maximum values of water storage when compared to epiphytic bryophytes [22], contributing to the forest water cycle. In addition, the cover of forest floor bryophytes reduces soil temperature, while it improves the retention of soil moisture by decreasing evapotranspiration [31]. Lower soil temperature subsequently limits the decomposition of fresh litter, causing organic carbon to accumulate in the soil [32]. Considering these ecological functions of forest floor bryophytes, it is important to evaluate their water storage capacity to reveal the influence of climate warming on the hydrological processes of forest ecosystems. In this study, the water storage capacity of bryophytes, including forest floor species, was evaluated in the montane forests of Japan, where high bryophyte diversity is harbored [33,34]. For this purpose, this study focused on the altitudinal patterns of bryophytes because altitude is one of the major factors that determine the behavior of bryophytes in montane forest ecosystems [35,36]. The results of this study will advance our understanding of how bryophytes contribute to the hydrological processes in forest ecosystems, which may implicate changes in their water storage function in response to climate warming.
Study Site The study site is located in the Yatsugatake Mountains, central Japan (Figure 1). These mountains stretch ca. 30 km from north to south and 15 km from east to west. The highest peak is Mt. Akadake (2899 m). The vegetation is roughly grouped into four types: temperate broadleaved forests (below ca. 1800 m), subalpine conifer forests (ca. 1800-2600 m), stone pine forests (ca. 2600-2800 m), and alpine meadows (ca. 2800-2900 m). Annual mean temperature and precipitation from 1981 to 2010, measured at the closest weather station (Nobeyama; 1350 m alt.), were 6.9 °C and 1439 mm, respectively [37]. The highest temperatures are recorded in August (19.2 °C) and the lowest in January (−5.3 °C) [38]. Precipitation also changes seasonally, being highest in September (210.5 mm/month) and lowest in December (38.4 mm/month) [38].
Sampling and Water Storage Capacity Twelve 10 m × 10 m study plots were selected at 200-m altitude intervals from 1800 to 2800 m along two trails (eastern trail; E trail, western trail; W trail) extending from the east to the west side of Mt. Yatsugatake (Figure 1). The plot of the E trail at 1800 m belongs to temperate-subalpine mixed forests, while the E trail extending from ca. 2000 m to 2600 m and the W trail extending from ca.
1800-2400 m altitude are classified as subalpine forests. Other plots at higher altitudes belong to stone pine forests or alpine meadows. In these study plots, three to four samples of dominant bryophyte communities were collected from each substrate [soil (including humus), rock, logs, and tree trunks] using sampling quadrats (10 cm × 10 cm). When dominant bryophyte communities consisted of more than one species, the ratios of each species in the collected samples were adjusted to those observed in the field. When the largest bryophyte cover was smaller than 100 cm × 100 cm on a substrate, this substrate was not included in the sampling. The life forms of sampled species were also recorded according to the life form classification of Bates [40]. To estimate the maximum water storage capacity (WSC) of bryophytes under field conditions, sampling was completed during August 2015, when no rain occurred for five consecutive days during the summer. Substrate and bryophyte cover were measured in these plots in 10% increments; however, when the cover was less than 10%, they were recorded in 5% increments. Percentage values of these covers were then transformed to m² values. Collected samples were placed in sealed plastic bags to keep their community structure as intact as possible and were transported to the laboratory. Soil, litter, and other small mixed species were cleaned from the samples, and the samples were weighed (fresh weight; Fw). After weighing, the samples were dipped in a water container to represent the state of bryophytes when fully saturated by heavy rainfall. Samples were then taken out and left for 10 min to remove the extra external water, which was not tightly connected with the shoots. These samples were then weighed again (saturated weight; Sw). After these procedures, bryophyte samples were oven-dried at 80 °C for 48 h and then weighed again (dry weight; Dw). Using Fw and Dw values, two types of water storage capacity (WSC) of bryophytes were calculated: WSC of fresh samples (WSCf) and WSC of oven-dried samples (WSCd). WSCf represents the possible maximum amount of water absorbed by bryophytes under field conditions. However, dried samples are often used to estimate bryophyte WSC [9,22,25,27,28]; therefore, WSCd was also used for comparison with other studies. To examine the influence of bryophytes on soil moisture, three soil samples were collected using a soil core sampler (100 cm³) in each plot. The collected soil samples were preserved in sealed plastic bags and were weighed in the laboratory. After oven-drying at 80 °C for 48 h, these samples were weighed again. The difference in weight before and after drying was used as the soil moisture (g/100 cm³).
Water Storage Capacity at Quadrat, Substrate, and Plot Scales Bryophyte WSC was assessed hierarchically at three scales: quadrat, substrate, and plot (Figure 2). The values of WSCf/WSCd at sampling quadrats (WSCf-quadrat/WSCd-quadrat; g/100 cm²) were calculated by subtracting Fw/Dw from Sw as follows: WSCf-quadrat = Sw − Fw (1); WSCd-quadrat = Sw − Dw (2). Using these values, the total WSCf/WSCd of bryophytes on each substrate within a plot (WSCf-substrate/WSCd-substrate; L) was estimated according to the following equations: WSCf-substrate(K) = WSCf-quadrat × Cov(K) × 100/1000 (3); WSCd-substrate(K) = WSCd-quadrat × Cov(K) × 100/1000 (4), where K = substrate type (soil, rocks, logs, and tree trunk) and Cov(K) = total bryophyte cover on substrate K (m²; 1 m² corresponds to 100 quadrats of 100 cm², and division by 1000 converts g to L). Then, the total WSCf/WSCd within each plot (WSCf-plot/WSCd-plot; L/100 m²) was evaluated using the values of WSCf-substrate/WSCd-substrate as follows: WSCf-plot = Σk WSCf-substrate(k) (5); WSCd-plot = Σk WSCd-substrate(k) (6), where k denotes substrate type (k = 1: soil, k = 2: rocks, k = 3: logs, k = 4: tree trunk).
Modeling The difference in bryophyte WSC-quadrat between substrate types and between life form types was examined by t-test or Tukey's multiple comparison test. The influence of forest floor bryophytes on soil moisture was then examined by Pearson product-moment correlation between the WSC-quadrat values of bryophytes on soil and soil moisture. As the texture of soils largely differed between alpine and below-alpine areas, the alpine data were not included in the calculation. At the substrate scale, the values of bryophyte WSC can be influenced by environmental (e.g., substrate type and cover) and ecological factors (e.g., type of bryophyte community). To reveal these influences, linear models were used that correlated the values of WSCf-substrate/WSCd-substrate with these variables. This modeling was performed for fresh and dried bryophyte samples, respectively. The environmental and ecological variables used in the modeling were substrate type, cover of each substrate within a plot (m²), Fw/Dw, WSCf-quadrat/WSCd-quadrat, total bryophyte cover on each substrate (m²), and water uptake strategies of the dominant bryophyte communities (ectohydric, endohydric, and mixed types). The types of substrate and water uptake strategies were adopted as categorical variables. The best-fit models were identified using the step Akaike information criterion (AIC) function. For these models, only variables that were significantly correlated with WSCf-substrate/WSCd-substrate were used. In addition to these models, linear mixed models were constructed with the E/W trails as nested variables to reflect the influence of each trail on bryophyte WSC. The best-fit models were selected using the same procedures as for the linear models. All calculations were performed with R software [41].
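A minimal Python sketch of the hierarchical WSC calculation reconstructed above (the original analysis was performed in R [41]); the function and variable names are illustrative, and the example numbers are stand-ins close to the reported averages rather than the study's measurements.

```python
# Hierarchical water storage capacity (WSC) following Equations (1)-(6):
# quadrat weights in g/100 cm^2, substrate cover in m^2, plot WSC in L.

def wsc_quadrat(sw, fw_or_dw):
    """Equations (1)/(2): water absorbed by a 100 cm^2 quadrat sample (g)."""
    return sw - fw_or_dw

def wsc_substrate(wsc_q, cover_m2):
    """Equations (3)/(4): total WSC on one substrate within a plot (L).
    1 m^2 holds 100 quadrats of 100 cm^2; dividing by 1000 converts g to L."""
    return wsc_q * cover_m2 * 100 / 1000

def wsc_plot(per_substrate_wsc):
    """Equations (5)/(6): sum over substrates (soil, rocks, logs, trunks)."""
    return sum(per_substrate_wsc)

# Illustrative numbers close to the reported averages (fresh samples):
q_soil = wsc_quadrat(sw=35.16, fw_or_dw=13.78)           # ~21.4 g/100 cm^2
total = wsc_plot([wsc_substrate(q_soil, cover_m2=20.0),  # bryophytes on soil
                  wsc_substrate(q_soil, cover_m2=5.0)])  # bryophytes on logs
print(f"WSCf-plot ≈ {total:.1f} L/100 m^2")
```

With an average quadrat value of about 21 g/100 cm², one square metre of bryophyte cover stores roughly 2.1 L, consistent with the slope of the cover-based model reported below (Y = 2.101X − 1.395).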
Comparison of Water Storage Capacity of Bryophyte Communities In the 12 study plots, bryophyte cover larger than 100 cm × 100 cm was only found on the soil and on logs; hence, the bryophyte communities on these substrates were sampled. In total, 62 samples were collected from these plots, and 16 bryophyte communities consisting of 11 bryophyte species were recorded (Table 1). These species included Codriophorus fascicularis (Cor fas) and Rig rob, among others (Table 1). Among the 16 bryophyte communities, several small liverworts (e.g., Cephalozia sp.) were often found; however, their influence on the WSC of these bryophyte communities was not considered because they represented low biomass and a small number of shoots. The life form types recorded in the samples were tall turfs (T), large cushions (Cu), smooth mats (Sm), rough mats (Rm), thread-like forms (Tl), and wefts (W). The bryophyte communities were grouped into three types by the number of species: communities consisting of one species (e.g., Cor fas), two species (e.g., Dic maj-Hyl spl), or three species (e.g., Dic maj-Het aff-Ple shr). Among the collected species, Hyl spl and Ple shr had the highest occurrence in these dominant communities (Hyl spl, 11 times; Ple shr, 11 times), ranging from 1800 to 2600 m altitude. The average ± standard deviation (SD) of Fw, Dw, and Sw were 13.78 ± 7.91, 4.98 ± 2.86, and 35.16 ± 15.46 g/100 cm², respectively. Using these values, WSCf-quadrat/WSCd-quadrat was calculated. The values of WSCf-quadrat were 19.72 ± 9.10 g/100 cm² for the soil and 23.69 ± 11.94 g/100 cm² for logs. Higher WSCf-quadrat values were measured in Pog jap, Cor fas, Hyl spl-Ple shr, and Rig rob-Ple shr communities (average values > 30 g/100 cm²), while Hyl spl-Ple shr communities also had larger variation in these values, ranging from 12.21 to 36.00 g/100 cm² on average across substrates (Table 1). The values of the WSCd-quadrat demonstrated similar trends to those of the WSCf-quadrat and were significantly correlated with them (r = 0.915, n = 62, p < 0.01). The values of the WSCd-quadrat were 29.80 ± 13.11 g/100 cm² for the soil and 30.72 ± 13.11 g/100 cm² for logs. To examine the influence of life forms on bryophyte WSC, the WSC of each life form type was calculated (Table 2). In fresh samples, higher WSC was measured in Rm-W, followed by Cu, Sm-W, and W forms. In contrast, Tl-W, T-Sm-W, and T-W forms had lower values. The dry samples showed similar results. Using multiple comparison, significant differences were found between Rm-W and Tl-W in fresh samples and between Cu and Tl-W in dry samples (p < 0.05). Abbreviations are as follows: T, tall turfs; Cu, large cushions; W, wefts; Rm, rough mats; Sm, smooth mats; Tl, thread-like forms; n, number of quadrats; WSCf-q, water storage capacity of fresh samples within a quadrat; WSCd-q, water storage capacity of dried samples within a quadrat; SD, standard deviation; Significance, different letters show significant differences by multiple comparisons (p < 0.05).
Water Storage Capacity of Bryophytes and Soil Moisture The influence of bryophyte WSC on soil moisture was examined using the data excluding alpine areas. The values of soil moisture were 13.82 ± 5.27 g/100 cm³ (average ± SD). Soil moisture was significantly and positively correlated with WSCd-quadrat (n = 9, r = 0.778, p = 0.014) and was strongly, but not significantly, correlated with WSCf-quadrat (n = 9, r = 0.613, p = 0.079).
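The correlation tests reported above can be reproduced with SciPy's Pearson product-moment correlation; the nine per-plot values below are placeholder numbers, not the study's data.

```python
from scipy.stats import pearsonr

# Placeholder per-plot values for the nine below-alpine plots:
# soil moisture (g/100 cm^3) and WSCd-quadrat of bryophytes on soil (g/100 cm^2).
soil_moisture = [9.5, 11.2, 13.0, 14.8, 12.1, 18.9, 16.4, 20.3, 8.2]
wsc_d_quadrat = [18.0, 22.5, 27.1, 31.0, 24.8, 41.2, 35.6, 44.0, 16.9]

r, p = pearsonr(wsc_d_quadrat, soil_moisture)
print(f"r = {r:.3f}, p = {p:.3f}")  # compare with the reported r = 0.778, p = 0.014
```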
Altitudinal Patterns of Water Storage Capacity of Bryophyte Communities at the Quadrat Scale The values of WSCf-quadrat/WSCd-quadrat were compared in relation to substrate type and altitude. The differences in WSCf-quadrat/WSCd-quadrat between the soil and logs were not statistically significant (WSCf-quadrat: t = 1.486, df = 60, p = 0.143; WSCd-quadrat: t = 0.270, df = 60, p = 0.788). Furthermore, the changes in WSCf-quadrat/WSCd-quadrat with altitude were also not significant for the soil or logs (Figure 3). The Pearson correlation coefficient between WSCf-quadrat/WSCd-quadrat and altitude on soil was 0.118 (n = 62, p = 0.43)/0.227 (n = 62, p = 0.18), while that on logs was 0.209 (n = 62, p = 0.31)/0.193 (n = 62, p = 0.35).
Water Storage Capacity of Bryophyte Communities on Each Substrate Bryophyte cover was recorded on each substrate to calculate WSCf-substrate/WSCd-substrate. Bryophyte cover differed greatly among study plots. The average and SD of total bryophyte cover within each plot was 24.70 ± 18.86 m²/100 m². The cover showed the highest values in subalpine forests for both E and W trails and on both the soil and logs (Table 1). The E trail had the highest values at 2400 m on soil and at 2200 m on logs, whereas the W trail had the highest values at 1800 m on soil and at 2200 m on logs. The values of WSCf-substrate/WSCd-substrate were then calculated according to Equations (3) and (4) for each substrate. The resulting values of WSCf-substrate on soil were 60.02 ± 46.86 L/100 m², while those on logs were 26.63 ± 31.34 L/100 m². Using these values, linear models for the WSCf-substrate/WSCd-substrate were constructed based on the environmental and ecological variables. Among these variables, only bryophyte cover was significantly correlated with the values of the WSCf-substrate/WSCd-substrate. Hence, this variable was adopted as the explanatory variable for the linear models. The constructed models for both WSCf-substrate (Y = 2.101X − 1.395, R² = 0.856) and WSCd-substrate (Y = 3.088X − 2.167, R² = 0.866) fitted well. In comparison, no significant linear mixed models were constructed for any variable.
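The cover-based linear model above (for fresh samples, Y = 2.101X − 1.395 with R² = 0.856) can be re-fitted by ordinary least squares; the cover/WSC pairs below are synthetic points scattered around the reported fit, not the measured values.

```python
import numpy as np

# Synthetic (cover in m^2, WSCf-substrate in L) pairs around the reported fit.
rng = np.random.default_rng(0)
cover = np.array([2.0, 5.0, 10.0, 18.0, 25.0, 40.0, 55.0])
wsc_f = 2.101 * cover - 1.395 + rng.normal(0.0, 4.0, cover.size)

slope, intercept = np.polyfit(cover, wsc_f, deg=1)   # least-squares line
pred = slope * cover + intercept
r2 = 1 - np.sum((wsc_f - pred) ** 2) / np.sum((wsc_f - wsc_f.mean()) ** 2)
print(f"Y = {slope:.3f}X {intercept:+.3f}, R^2 = {r2:.3f}")
```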
Water Storage Capacity of Bryophyte Communities per Plot The values of the WSCf-plot/WSCd-plot were calculated based on Equations (5) and (6) (Table 3, Figure 4). The values of the WSCf-plot (average ± SD) were 86.71 ± 35.90 L/100 m² (equivalent to an increase of 0.8671 ± 0.3590 mm of rainfall interception), while those of the WSCd-plot were 123.51 ± 52.59 L/100 m² (= 1.2351 ± 0.5259 mm). Both the WSCf-plot/WSCd-plot of the E trail and the W trail had the highest values in subalpine forests; however, the altitude of the plots with the highest WSC values differed for fresh and dry samples. The WSCf-plot along the E trail had the highest value at 2200 m (115.51 L/100 m²), while the highest value of the WSCd-plot was observed at 2600 m (153.81 L/100 m²). The values of the WSCf-plot along the W trail were highest at 1800 m (179.45 L/100 m²), whereas the WSCd-plot values were highest at 2200 m (253.60 L/100 m²).
Discussion The WSC-quadrat of bryophytes was influenced by the life form types and their interactions, further affecting soil moisture; however, it did not vary with altitude or exhibit significant differences between the substrate types. At the plot scale, the WSC-plot significantly correlated with bryophyte cover and was highest in subalpine forests.
Water Storage Capacity at the Quadrat Scale Comparison of the WSCf-quadrat/WSCd-quadrat showed higher values in Hyl spl-Ple shr, Ple shr-Rig rob, Pog jap, and Cor fas communities (Table 1). These results are explained by the community structure of bryophytes. Bryophytes forming compact mats had higher WSC because the spaces between individual shoots retain additional external water [29]. In the plots of the present study, these compact mats were formed by weft-forming mosses (W form; Hyl spl and Ple shr) and a large cushion moss (Cu form; Cor fas), which contributed to the higher values of the WSCf-quadrat/WSCd-quadrat. In comparison, the community of Pog jap forms tall turfs (T form) that physically increase the amount of water held in these communities. Regarding the Hyl spl-Ple shr community, this moss community had large differences in the WSCf-quadrat/WSCd-quadrat, ranging from 12.21-36.00/15.49-43.52 g/100 cm². These results are attributed to differences in compactness or shoot density, which are reflected by the wider range of their Fw/Dw values (6.20-17.40/2.91-6.32 g/100 cm²). Some combinations of life forms (T-W and T-Sm-W forms) had lower WSC values on average than those of less mixed or single forms (T or W and T or Sm-W forms), despite there being no differences among the species in these communities (Table 2). The decreased WSC of these mixed life forms could be explained by their poor ability to form tight communities with neighboring species, due to differences in the characteristics of the life forms. The upright T form is largely different from the creeping Sm and W forms in its morphology. Moreover, the T form species include endohydric bryophytes (Polytrichaceae; P. contortum and P. japonicum), which develop internal water-conducting tissues and mainly absorb water from substrates [42]. In contrast, the Sm and W forms are ectohydric species without such conducting tissues, and exclusively rely on external capillary water [42]. These results are supported by an experiment that showed a reduced WSC in mixtures of bryophytes with different life forms and water uptake systems [43]. In comparison, the results of this study demonstrated no statistical differences in WSCf-quadrat/WSCd-quadrat values between substrates, nor any change in these values with altitude (Figure 3). This is because logs largely covered by bryophytes were almost decayed and their surface material was similar to that of humus soil. These similarities in substrate surface could result in the development of similar bryophyte communities between logs and the soil. Furthermore, the differences of dominant bryophytes along the altitudinal gradient were less clear at the study sites, as several species occurred over a wide altitudinal range (e.g., Hyl spl, 1800-2600 m; Ple shr, 1800-2600 m), which could reduce the magnitude of change to the WSC-quadrat with altitude.
Influence of Forest Floor Bryophytes on Below-Ground Processes The values of WSCf-quadrat/WSCd-quadrat were positively correlated with soil moisture. An increase in soil moisture under bryophyte cover has been reported because the evapotranspiration rate of bryophytes is lower than that of grasses [31] and they are able to retain a large amount of water during wet periods [44,45]. In addition, this study suggests that bryophytes with higher WSC have a larger influence on the increase in soil moisture. This influence could be related to the transport of larger amounts of moisture from bryophytes with higher WSC to the soil surface during evapotranspiration processes. Moreover, bryophytes with higher WSC might further reduce water evaporation from the soil surface, as these bryophytes often retain water for longer periods [29].
Water Storage Capacity at the Substrate/Plot Scale The WSC values of bryophytes were affected by biomass (plant tissue mass), species type, and growth form [29,43]; however, the constructed linear models revealed that the values of the WSCf-substrate/WSCd-substrate were largely dependent on the total bryophyte cover on each substrate, regardless of the type of species and their substrates. These results are explained by the large differences in bryophyte cover among the study plots (24.70 ± 18.86 m²/100 m²), which decrease the relative influence of other factors (e.g., bryophyte community type) on biomass and make cover a useful substitute for biomass and the associated WSC-plot. Due to this strong correlation of bryophyte cover with biomass, the altitudinal patterns of the WSCf-plot/WSCd-plot closely fitted a negative quadratic curve (E trail) or a linear regression (W trail), with the highest values in subalpine forests where the highest bryophyte cover was recorded (Figure 4). In general, bryophyte cover on the forest floor changes with altitude. Subalpine conifer forests had higher cover due to the favorable environment for bryophyte growth, such as low temperature and high occurrence of fog, and less influence from fallen leaves [46]. In contrast, bryophyte cover tends to decline in temperate broadleaved forests because fallen leaves shade bryophytes on the forest floor and inhibit their photosynthesis [35]. A decline in bryophyte cover has also been reported in alpine zones due to the lack of forest canopies to provide suitable habitats for bryophytes [36]. Interestingly, the plots with the highest WSC-plot values differed between fresh (WSCf-plot) and dried samples (WSCd-plot) on both E and W trails (Table 3). These differences were attributed to larger differences between WSCf-quadrat and WSCd-quadrat in endohydric species (Polytrichaceae sp.). Despite the bryophyte samples being collected during a dry period (no rainfall), endohydric species still had a higher water retention status because they absorb water from the soil, whereas the water content of ectohydric species was severely reduced. This retained water was completely lost during the oven-drying process, which increased the amount of water absorbed by the dry samples of endohydric species compared to ectohydric bryophytes. These differences in WSC between fresh and dry samples should be carefully considered when one estimates the WSC of bryophytes under field conditions, as the estimated WSC of endohydric bryophytes might be relatively higher than that of ectohydric species if dried samples are used for the calculation.
Due to the higher cover by bryophytes in subalpine forests, the estimated WSCd-plot had a maximum of ca. 2.5 mm of extra rainfall interception (Table 3), which is almost equivalent to the values reported for the WSC of epiphytic bryophytes on trees in montane cloud forests (3.0-3.5 mm) [23,25]. These results underline the importance of forest floor bryophytes for the overall hydrological processes of subalpine forests. Furthermore, considering the influence of snowmelt and cloud water deposition on these bryophytes, their contribution to forest hydrology could be more significant than expected from their interception of additional rainfall alone. Snow pack is a key factor that determines the water dynamics in subalpine forests [47]. After thawing of snow, forest floor bryophytes absorb snowmelt and affect forest soil hydrology by increasing soil infiltration [48]. Like snow packs, cloud water deposition is an important water supply for forests at higher altitudes, due to the frequent occurrence of fog [49]. Therefore, forest floor bryophytes, especially ectohydric species that largely rely on atmospheric water, might contribute to the water cycle in this ecosystem through the interception of fog and dew.
Changes to Water Storage Capacity by Climatic Change Regarding the forest water storage for which bryophytes are responsible, the influence of climate warming might be most serious in subalpine forests, where bryophyte WSC showed the highest values (Figure 4). Given that global environmental changes seem to affect ecosystems more strongly at the community level than at the individual species level [50], the structure of bryophyte communities is more strongly influenced by climate warming than the species level [43]. For example, severe drought stress caused by climate warming [51] might facilitate the dominance of endohydric T form species (e.g., Pog jap) over ectohydric W form species (e.g., Hyl spl), because these T form species can be less affected by drought stress owing to their capacity to absorb water from soil. However, as this study revealed, these changes in the structure of bryophyte communities (i.e., dominant life form types and their interactions) influence the WSC, which also affects the soil moisture that determines soil carbon and nitrogen cycling [52,53]. Hence, climate warming might strongly affect bryophyte WSC controlled by life forms, further causing changes to the biological activity and nutrient cycling of soil in forest ecosystems.
Conclusions The bryophyte WSC-plot changed with altitude and was highest in subalpine forests. This altitudinal pattern was explained by bryophyte cover, which could be used as a substitute for bryophyte biomass and its associated WSC. At the quadrat scale, the WSC of bryophytes was related to life form type and their interactions. The WSC further had a positive impact on soil moisture, which is important for soil biological activities. Of importance, bryophyte WSC controlled by the life forms might be strongly affected by climate warming. Thus, changes to the dominance of bryophyte life forms might serve as an early indicator of how bryophyte WSC and associated biological activities are influenced by climate warming.
Figure 1.Study site, Mt.Yatsugatake, central Japan.The location of Mt.Yatsugatake is shown by the red circles in the slide on the left.Study plots were established along the E and W trails at 200-m altitude intervals, from 1800 to 2800 m.The left-hand panel was adapted from Figure1of Oishi[39].The right-hand panel was created using global information system data provided by the Ministry of Land and Geospatial Information Authority of Japan. Figure 1 . Figure 1.Study site, Mt.Yatsugatake, central Japan.The location of Mt.Yatsugatake is shown by the red circles in the slide on the left.Study plots were established along the E and W trails at 200-m altitude intervals, from 1800 to 2800 m.The left-hand panel was adapted from Figure1of Oishi[39].The right-hand panel was created using global information system data provided by the Ministry of Land and Geospatial Information Authority of Japan. Figure 2 . Figure 2. Schematic showing the hieratical evaluation of the water storage capacity of bryophytes. Figure 2 . Figure 2. Schematic showing the hieratical evaluation of the water storage capacity of bryophytes. Figure 3 . Figure 3. Changes to the water storage capacity of bryophyte communities at the quadrat scale with respect to altitude: (a) fresh samples and (b) dry samples; the error bar indicates standard deviation. Figure 3 . Figure 3. Changes to the water storage capacity of bryophyte communities at the quadrat scale with respect to altitude: (a) fresh samples and (b) dry samples; the error bar indicates standard deviation. Figure 4 . Figure 4. Change to water storage capacity of bryophyte communities within each plot along an altitudinal gradient.(a) Fresh samples, (b) Dry samples. Figure 4 . Figure 4. Change to water storage capacity of bryophyte communities within each plot along an altitudinal gradient.(a) Fresh samples, (b) Dry samples. Table 1 . Water storage capacity of dominant bryophyte communities at the quadrat scale (100 cm 2 ), and their total cover on each substrate within the study plots. Table 2 . Water storage capacity of bryophyte communities at the quadrat scale (100 cm 2 ) among life form types. f -plot; water storage capacity of fresh samples in a plot, WSC d -plot; water storage capacity of dried samples in a plot, SD; standard deviation.
2019-04-03T13:06:12.243Z
2018-07-18T00:00:00.000
{ "year": 2018, "sha1": "fb7b4f7ab27463183968af97187357008e5334e2", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1999-4907/9/7/433/pdf?version=1531926940", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "fb7b4f7ab27463183968af97187357008e5334e2", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
96563763
pes2o/s2orc
v3-fos-license
Dihydropyranoflavones from Pongamia pinnata From the stem bark of Pongamia pinnata, two new compounds, 3-methoxy-(3″,4″-dihydro-3″-hydroxy-4″-acetoxy)-2″,2″-dimethylpyrano-(7,8:5″,6″)-flavone and 3-methoxy-(3″,4″-dihydro-4″-hydroxy-3″-acetoxy)-2″,2″-dimethylpyrano-(7,8:5″,6″)-flavone, were isolated together with six known compounds: caryophyllene oxide, obovatachalcone, 8-hydroxy-6-methoxy-3-pentyl-1H-isochromen-1-one, 6,7,2,2-dimethylchromono-8,γ,γ-dimethylallylflavanone, isolonchocarpin, and ovaliflavanone A. Their structures were determined from the interpretation of spectroscopic data.
Introduction Pongamia pinnata (Linn) Pierre (Leguminosae, Papilionaceae; synonym, Pongamia glabra Vent), the only species of the genus Pongamia, is a medium-sized glabrous tree that grows in the littoral regions of South Eastern Asia and Australia. All parts of the plant have been used as crude drugs for the treatment of tumors, piles, skin diseases, wounds, and ulcers. 1 Extracts of the plant possess significant anti-diarrhoeal, anti-fungal, anti-plasmodial, anti-ulcerogenic, anti-inflammatory, and analgesic activities. 2 Previous phytochemical investigations of this plant indicated the presence of abundant prenylated flavonoids such as furanoflavones, furanoflavonols, chromenoflavones, furanochalcones, and pyranochalcones. 3 In this paper, we report the isolation and identification of two new flavones (1, 2) from the stem bark of Pongamia pinnata. Compound 2, a white powder, gave a molecular ion [M+] at m/z 410.13665 in the HREIMS, corresponding to the molecular formula C23H22O7 (calc. 410.13655). The NMR spectral data of 2 (Table 1) were closely comparable to those of compound 1, the only difference being the positions of the acetoxy group (δH 2.13, 3H, s; δC 170.5; δC 21.3, OAc-3″) and the hydroxyl group on the pyran ring. In the HMBC spectra, the observed correlations from the protons of Me1-2″ (δH 1.41, 3H, s) and Me2-2″ (δH 1.35, 3H, s) to C-3″ (δC 72.9), and from H-3″ (δH 5.06, 1H, d, J 4.8 Hz) to the carbonyl of the acetoxy group, indicated that the acetoxy group is located at the C-3″ position. Further analysis of the NMR and MS spectra of compound 2 indicated the presence of the hydroxyl group at the C-4″ position. The coupling constant J3″,4″ was 4.8 Hz and the chemical shift difference of the gem-dimethyl signals was 0.06 ppm in the 1H NMR spectrum. This evidence suggested that the relative configuration of compound 2 was the cis-form.
General Optical rotations were measured with a Jasco 1020 polarimeter. NMR spectra were obtained on a Bruker AVANCE 500 spectrometer (500 MHz for 1H NMR, 125 MHz for 13C NMR). EIMS and HREIMS spectra were recorded on a Finnigan MAT TSQ 700 mass spectrometer. UV spectra were obtained on a Beckman DU-640 UV spectrophotometer. A Waters Nova-pack HR C18 column (19 × 300 mm) was used for semipreparative HPLC, along with a Waters 600E Multisolvent Delivery System and a Waters 996 Photodiode Array Detector.
Plant material The material investigated was the stem bark of Pongamia pinnata, collected in October 2002 from Hainan Province, southern China. The material was identified by Professor Si Zhang, Guangdong Key Laboratory of Marine Materia Medica, South China Sea Institute of Oceanology, Chinese Academy of Sciences. A voucher specimen is deposited at the herbarium of the South China Sea Institute of Oceanology (No. GKLMMM005).
Table 1. 1H, 13C and selected HMBC NMR data for compounds 1 and 2.a
a Spectra recorded in DMSO-d6 (500 MHz for 1H, 125 MHz for 13C); TMS was used as internal standard.
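The HREIMS assignment quoted above can be checked by summing the monoisotopic masses of the most abundant isotopes; the short sketch below reproduces the calculated value for C23H22O7.

```python
# Monoisotopic masses (u) of the most abundant isotopes: 12C, 1H, 16O.
MASS = {"C": 12.0, "H": 1.00782503, "O": 15.99491462}

def monoisotopic_mass(formula):
    """formula: mapping of element symbol to atom count."""
    return sum(MASS[element] * count for element, count in formula.items())

mass = monoisotopic_mass({"C": 23, "H": 22, "O": 7})
print(f"calc. for C23H22O7: {mass:.5f}")  # 410.13655, matching the reported value
```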
2019-04-06T13:10:38.882Z
2006-12-01T00:00:00.000
{ "year": 2006, "sha1": "d4d497fcc4d9c5b421a2a62fc8722a8c23a1edb2", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/jbchs/a/6yWccJWt5Z9wSthdD8hsz4H/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9a349956b62c97fd0a85331162c95a649dcef2c1", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Mathematics" ] }
260258877
pes2o/s2orc
v3-fos-license
An Efficient Ensemble Approach for Alzheimer's Disease Detection Using an Adaptive Synthetic Technique and Deep Learning Alzheimer's disease is an incurable neurological disorder that leads to a gradual decline in cognitive abilities, but early detection can significantly mitigate symptoms. The automatic diagnosis of Alzheimer's disease is all the more important given the shortage of expert medical staff, because it reduces the burden on medical staff and enhances diagnostic results. A detailed analysis of specific brain disorder tissues is required to accurately diagnose the disease via segmented magnetic resonance imaging (MRI). Several studies have used traditional machine-learning approaches to diagnose the disease from MRI, but manual feature extraction is more complex and time-consuming and requires a huge amount of involvement from expert medical staff. The traditional approach does not provide an accurate diagnosis. Deep learning offers automatic feature extraction and optimizes the training process. The Magnetic Resonance Imaging (MRI) Alzheimer's disease dataset consists of four classes: mild demented (896 images), moderate demented (64 images), non-demented (3200 images), and very mild demented (2240 images). The dataset is highly imbalanced. Therefore, we used the adaptive synthetic oversampling technique to address this issue. After applying this technique, the dataset was balanced. An ensemble of VGG16 and EfficientNet was used to detect Alzheimer's disease on both the imbalanced and balanced datasets to validate the performance of the models. The proposed method combines the predictions of multiple models into an ensemble model that learns complex and nuanced patterns from the data. The outputs of both models were concatenated to form the ensemble model and then passed to additional layers to make a more robust model. In this study, we proposed an ensemble of EfficientNet-B2 and VGG-16 to diagnose the disease at an early stage with the highest accuracy. Experiments were performed on two publicly available datasets. The experimental results showed that the proposed method achieved 97.35% accuracy and 99.64% AUC for the multiclass dataset and 97.09% accuracy and 99.59% AUC for the binary-class dataset. We found that the proposed method was extremely efficient and provided superior performance on both datasets compared to previous methods.
Introduction Alzheimer's disease (AD) is an incurable neurological disorder that leads to a gradual decline in cognitive abilities, but early detection can significantly mitigate symptoms [1]. Patients with AD lose their cognitive abilities, making it difficult to carry out normal responsibilities and perform daily routine tasks; thus, they become dependent on their family for small tasks and survival. AD causes memory problems, such as difficulty remembering, arranging, and recollecting things, as well as impaired intuition and judgment [2]. Around 2% of people aged 65 and 35% of those aged 85 are affected by AD. It was reported that 26.6 million people were affected in 2006, and the count is increasing dramatically [3]. In 2020, more than 55 million people were affected by AD, and the count is estimated to reach 152 million by 2050 [4]. The degradation of brain cells, synaptic dysfunction, and pathological changes start to develop almost 20 years before AD diagnosis [5].
A proper diagnosis of the disease is also needed to develop the necessary drugs to slow down the progression process, and the patient's whole medical history is thoroughly examined for effective monitoring of the disease. The overall cost and effort faced by patients and families are also increasing dramatically. Researchers have emphasized the importance of the early detection of AD for starting treatment promptly and obtaining accurate results. Individuals with AD typically exhibit a reduction in brain tissue volume in the hippocampus and cerebral cortex, accompanied by an expansion of the ventricles of the brain, as observed in multiple studies. In advanced stages of the disease, brain scans such as MRI images show a substantial reduction in the hippocampus and cerebral cortex, along with ventricular expansion [6]. AD primarily affects the regions of the brain and the intricate network of brain tissues involved in cognition, memory, decision making, and planning. The diffusion of brain tissues in the affected areas causes a decrease in image intensities in both the magnetic resonance imaging (MRI) and functional magnetic resonance imaging (fMRI) techniques [7][8][9]. In recent years, there has been a growing trend of using neuroimaging data and machine learning (ML) methods to characterize AD, providing a potential means for personalized diagnosis and prognosis [10][11][12]. Currently, deep learning (DL) has emerged as a powerful methodology in the diagnostic imaging field, as evidenced by several recent studies [13][14][15][16][17]. Diagnosing AD using DL is still a significant challenge for researchers [18]. Medical images are scarce and of lower quality, and the difficulty of identifying regions of interest (ROI) within the brain and unbalanced classes are further issues encountered in detecting AD. Among the various DL architectures, the convolutional neural network has received considerable interest due to its extraordinary effectiveness in classification [19]. In contrast to conventional machine learning, deep learning enables automatic feature extraction, from low-level to high-level latent representations. Therefore, deep learning requires minimal image pre-processing and little prior understanding of the synthesis process [20]. Imbalanced datasets are the most significant challenge for medical disease detection. For Alzheimer's disease, the number of samples in each class is not equal, as a balanced dataset is rarely available. The model's performance becomes biased, and generalization becomes difficult with imbalanced datasets. Individual deep learning models handle basic data efficiently, but overfitting occurs when dealing with complex problems. The generalizability, efficacy, and reliability of this type of model are poor. Individual deep learning models make predictions or detections based on learning with a single set of weights and do not capture nuances from all image features. To accurately diagnose a disease using segmented magnetic resonance imaging, it is necessary to conduct an in-depth examination of the disease-specific tissues. Several studies have used conventional machine-learning approaches to diagnose diseases from MRI, but manually derived features and the physical examination of medical data and patient records are more complex and time-consuming and require a significant level of medical staff involvement. The conventional method does not provide a precise diagnosis, resulting in errors and inefficiencies during diagnosis.
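A minimal sketch of the adaptive synthetic (ADASYN) balancing step discussed above, using the imbalanced-learn library; the flattened feature matrix and class counts below are illustrative stand-ins, not the study's actual MRI pipeline.

```python
import numpy as np
from imblearn.over_sampling import ADASYN

# Stand-in for flattened image features, with class counts mirroring the
# four-class MRI dataset (mild 896, moderate 64, non 3200, very mild 2240).
rng = np.random.default_rng(42)
counts = {0: 896, 1: 64, 2: 3200, 3: 2240}
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n, 64))
               for c, n in counts.items()])
y = np.concatenate([np.full(n, c) for c, n in counts.items()])

# ADASYN generates synthetic minority-class samples adaptively, adding more
# of them in regions where the minority class is harder to learn.
X_bal, y_bal = ADASYN(random_state=42).fit_resample(X, y)
print(np.bincount(y_bal))  # class counts are now approximately equal
```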
Deep learning automates the detection process, making it more efficient and faster. An accurate diagnosis is crucial in cases where early detection is essential for proper treatment. Deep learning models have demonstrated an extraordinary ability to learn nuanced patterns from complex and high-dimensional data. They can automatically extract pertinent information from the images and overcome the limitations of traditional methods. The proposed method addresses the data imbalance issue more efficiently with the adaptive synthetic oversampling technique and makes diagnostics faster. The proposed method combines the predictions of multiple models to make an ensemble, stronger model that learns complex and nuanced patterns from the data. The proposed method is more robust, reliable, and diverse in its decision making. Our objective was to examine the ensemble model's capacity to detect AD and perform feature extraction in order to improve the model's overall effectiveness. The main contributions of our study are as follows: 1. An efficient ensemble approach was proposed that combines VGG16 and EfficientNet-B2 for Alzheimer's disease classification with high accuracy using multiclass and binary-class datasets; the effect of transfer learning on the performance of the model was also explored. 2. The adaptive synthetic oversampling technique (ADASYN) was applied to a highly imbalanced dataset to balance the Alzheimer's disease classes. The efficacy of ADASYN in terms of model overfitting was also investigated to increase the generalization performance of deep learning models. 3. The efficacy of the proposed method was analyzed using k-fold cross-validation and compared with other state-of-the-art approaches. We also performed a comparison of ensemble and individual deep learning models. In this paper, we organized our content into several sections. Section 2 presents a comprehensive review of the relevant literature. Section 3 outlines the pre-processing, methods, and performance measures. The results and discussion are presented in Section 4. Section 5 provides the concluding remarks for this paper.
Literature Review Due to the prevalence and challenging nature of Alzheimer's disease (AD), its diagnosis poses difficulty for experts, and it has been extensively studied in the literature. The authors of [21] conducted a study in which they utilized Alzheimer's data to perform a classification process. Their dataset comprised three classes, and they employed DenseNet as the model, with soft-max serving as the classification layer. The study resulted in an accuracy of 88.9%. While the results were favorable, there remained potential for further improving the accuracy of the model. In addition, Yildirim et al. [22] conducted a study on AD classification using a four-class dataset. They employed convolutional neural network (CNN) architectures and compared the results with their proposed hybrid model, built upon a ResNet50 base and utilizing its knowledge. According to the authors, the hybrid model achieved an accuracy rate of 90%, which outperformed the success rate of pre-trained CNN models. The detection of AD has been extensively researched, and it poses various challenges. The authors of [23] utilized a sparse auto-encoder and 3D CNN to develop a model that could detect disease cases in affected individuals based on magnetic resonance imaging (MRI) of the brain. The use of three-dimensional convolutions was a significant breakthrough, as it outperformed two-dimensional convolutions.
Although the convolution layers were pre-trained with an auto-encoder, they were not fine-tuned, and it was anticipated that fine-tuning would lead to improved performance [24]. Researchers worldwide have shown great interest in classifying AD. The dominant technique for distinguishing healthy subjects in fMRI images is to extract features with a CNN, followed by deep learning (DL) classification. The authors of [25] used a deep CNN to classify Alzheimer's disease patients versus normal subjects with functional and structural MRI data, achieving 94.79% accuracy with the LeNet5 method and 96.84% accuracy with the GoogLeNet method. Recently, there has been a notable increase in the use of DL methods in various fields because of their superior performance compared to traditional methods. One study [26] developed a hybrid model that involved using extracted patches from an auto-encoder combined with convolutional layers. Another study [23] improved upon this by incorporating 3D convolution. In a previous study [27], auto-encoders arranged in a stack with a softmax layer were used for classification. Another study [28] utilized standard CNN architectures by intelligently selecting training data and utilizing transfer learning but did not achieve remarkable results. A comprehensive comparison was conducted in another study [29], which examined training from scratch versus fine-tuning pre-trained models. Based on the findings, in most cases, the latter outperformed the former. Fine-tuned CNNs have been used to solve numerous medical imaging problems, including plane localization in ultrasound images [30]. As discussed above, the use of transfer learning (TL) in the medical discipline is significant for detecting AD with sufficient precision.

Other research [31] emphasized the use of unsupervised feature learning, which involved two stages. The first stage was to extract features from unprocessed data using two methods: sparse filtering and unsupervised neural network layers. To classify healthy and unhealthy individuals, sparse filtering and regression with softmax were employed. Additionally, some unsupervised learning techniques, including Boltzmann machines and sparse coding, were used to process the collected data. The ADNI dataset containing cerebrospinal fluid was used in this approach, with a total of 51 AD patients, including 43 with mild signs of AD. MRI scans were collected using 1.5 T scanners. In their study, the authors of [32] proposed a technique that utilized ML algorithms to gather information about a patient's behavior over time. By employing Estimote Bluetooth beacons, the method accurately determined the location of the patient within the house, with a precision of up to 95%. Gerardin and colleagues investigated the use of hippocampal texture features [33] as an MRI-based diagnostic tool for early-stage AD, achieving a classification accuracy of 83%. They determined that the hippocampal feature outperformed other techniques in distinguishing stable MCI patients from MCI-to-Alzheimer's disease converters. Liu and colleagues [34] used stacked DL auto-encoders with softmax at the output layer to address the bottleneck issue, achieving a remarkable accuracy of 87.67% for multiclass classification with minimal input data and training. The researchers concluded that combining multiple features would lead to more precise classification results. The authors of [35] demonstrated the effect of transfer learning on image classification and showed that fine-tuning produced better results.
In [36], Alzheimer's disease was diagnosed employing a convolutional-neural-network-based architecture and magnetic resonance brain imaging. The VGG-16 model was deployed as a classification feature extractor. The findings showed that the proposed model for Alzheimer's disease achieved 95.7% accuracy. The study [37] introduced a transfer learning strategy to localize planes in ultrasound scans that could transfer knowledge using fewer layers. Another study [38] proposed an architecture that utilized a transfer learning approach for the detection of Alzheimer's disease from the multiclass Open Access Series of Imaging Studies (OASIS) dataset. The architecture was tested on pre-processed unsegmented and segmented images, and on both binary and multiclass datasets. The results demonstrated that the proposed architecture attained a 92.8% accuracy on the multiclass dataset and an 89% accuracy on the binary-class dataset. Iram [39] conducted research on the detection of Alzheimer's disease using biosignals and the most common machine learning models, which facilitated neurodegenerative disease diagnosis at an early stage. The dataset was imbalanced; to fix the imbalance, oversampling and undersampling techniques were employed, and missing values were addressed. Multiple metrics were employed by the author to evaluate the performance. This study emphasized the significance of machine learning and signal processing in the early identification of life-threatening diseases like Alzheimer's. Linear and Bayes classifiers were used, and the author obtained greater diagnostic accuracy with the Bayes classifier. Kim [40] developed machine learning algorithms for the identification of Alzheimer's disease biomarkers. The predictive performance of models employing multiple biomarkers was superior to that of models employing an individual gene. Biosignals were used by Han et al. [41] to identify dementia in elderly people. They employed no artificial intelligence techniques in their analysis. Insufficient participation made it impossible to derive broad generalizations; a larger number of individuals with moderate dementia from a broader population should be tested. Similarly, another study [42] employed biosignals to analyze cognitive disorders including Alzheimer's and Parkinson's diseases. The authors developed a novel, economical approach for disease identification. Hazarika et al. [43] presented a light-weight, inexpensive, and fast diagnosis method that used brain magnetic resonance scans. They initially used the DenseNet121 model, which was computationally very expensive and able to detect the disease with 87% accuracy. The authors therefore developed and combined two lighter models, AlexNet and LeNet, with fine-tuning. Their method extracted features by utilizing three parallel filters. Their study demonstrated that their model accurately detected the disease with a 93% accuracy rate. The researchers in [44] used the CNN-based transfer learning architecture VGG-16 to classify Alzheimer's disease and achieved 95.7% accuracy. Murugan et al. [45] proposed deep learning for dementia and Alzheimer's disease classification from magnetic resonance images.

Several studies in the literature have faced class imbalance issues in Alzheimer's disease detection, because imbalanced datasets lead to overfitting, inaccurate results, and low accuracy among deep learning models. Another problem is that there are not enough data available for training deep learning models.
Therefore, we utilized the adaptive synthetic technique (ADASYN), which creates new data samples synthetically, as deep learning models perform best with balanced datasets.

Proposed Methodology

This section describes the Alzheimer's disease dataset, pre-processing, the adaptive synthetic oversampling technique, the deep learning and ensemble models, model evaluation metrics, and classification results. Figure 1 briefly represents the workflow of the proposed method. The pre-processed dataset was then utilized for training the pre-trained models and the proposed method to efficiently and accurately detect Alzheimer's disease cases. When the training process was complete, the performance of the models was investigated on unseen data. The proposed methodology is discussed in the following subsections.

Dataset Description and Pre-processing

The two Alzheimer's disease datasets used in this study were collected from Kaggle's data repository. The multiclass dataset contained four classes, namely mild demented (MD), moderate demented (MOD), non-demented (ND), and very mild demented (VMD). A person suffering from dementia experiences disability in terms of behavioral skills, difficulty in learning and remembering things, and impaired thinking and reasoning, and it even affects the patient's personal life. However, dementia is not necessarily caused by aging, and its main sign is not memory loss. In the very mild demented (VMD) stage, the patient starts to suffer memory loss, forgetting where they put their belongings, names they recently heard, etc. It is hard to identify VMD patients through cognitive capacity tests. In the mild demented (MD) phase, the patient is unable to complete their work properly, forgets their home address, and has a hard time remembering things. These patients are not stable and even forget that they have memory issues, because they forget everything. This stage is detected by cognitive testing. The fourth class is moderate demented (MOD), which is the most alarming stage, because the patient loses the ability to understand anything and faces problems with calculation; it becomes difficult for them to leave home on their own because they forget the way; and they forget important historical events and activities they performed recently.

Table 1 shows the MMSE scores and the gaps between the Alzheimer's disease classes in the dataset. The mild demented class had a 25.12 MMSE score, the moderate demented class 21.77, the non-demented class 23.50, and the very mild demented class 24.51. The average MMSE score for all four classes was 23.72, with a 4.49 standard deviation. The largest gap between Alzheimer's disease classes was for the mild demented and moderate demented classes, at 3; the smallest gap was 0.59, for the mild demented and very mild demented patients.

The images of AD in the dataset were RGB images with varying numbers of pixels. The ND class contained 3200 samples, while the MD class contained 896 images, the VMD class contained 2240, and the MOD class contained 64. The only disadvantage of this dataset was that it was imbalanced. To solve this issue, we used ADASYN for class balancing. The second, binary-class MRI Alzheimer's dataset contained 965 AD and 689 MCI images. Medical image pre-processing is very important for achieving quality results and increasing image quality for machine and deep learning [46]. The images had different heights and widths, and to train the deep learning models, we needed fixed-size inputs.
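One way this requirement might be met is sketched below. OpenCV is an assumed tool choice, as the paper does not name its image-handling library; the 224 × 224 × 3 target is the fixed size stated in the next subsection.

```python
# Hypothetical resizing utility; OpenCV (cv2) is an assumed tool choice.
import cv2
import numpy as np

TARGET_SIZE = (224, 224)  # fixed input size used throughout the paper

def load_and_resize(image_paths):
    """Read each image as a 3-channel array and resize it to 224 x 224."""
    batch = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_COLOR)    # shape (H, W, 3)
        batch.append(cv2.resize(img, TARGET_SIZE))  # shape (224, 224, 3)
    return np.stack(batch)                          # shape (N, 224, 224, 3)
```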
Therefore, we resized all the images to a fixed size of 224 × 224 × 3.

Adaptive Synthetic (ADASYN) Technique

The adaptive synthetic (ADASYN) oversampling technique is used in classification tasks to handle imbalanced classes in datasets. ADASYN creates new synthetic samples of the minority class to address class imbalance issues, and it improves the generalization accuracy of various classifiers. ADASYN is mainly used to balance classes in object detection, facial expression recognition, and image analysis. It is a very effective and flexible technique compared with other oversampling techniques. Researchers have utilized the ADASYN oversampling technique to balance an imbalanced dataset for tuberculosis detection from CXR images; they balanced the minority classes with ADASYN to enhance the overall effectiveness of the tuberculosis detection model and achieved a high accuracy compared to other techniques [47]. Table 2 shows the training and testing images after splitting the balanced data. Algorithm 1 shows the steps of the ADASYN technique.

Ensemble Deep Learning with Transfer Learning Approach

Typically, constructing a deep learning architecture is a challenging task. The weights used in deep learning are initialized before the training phase and changed continuously; repeatedly updating the weights takes a lot of time and can lead to overfitting of the model. Transfer learning (TL) has been the most effective method to overcome these problems [48]. Transfer learning leverages previously learned knowledge from pre-trained models trained on large datasets. In addition, it adjusts the hyper-parameters and tunes the hidden layers of pre-trained models. The efficiency of deep learning may be improved by TL, which helps to save time and effort [49]. Ensemble learning is the most essential approach for improving the overall performance of several individual deep learning models. Ensemble learning trains many deep learning models on the same dataset and integrates them so that the predictions made by the models are accurate and the detection accuracy increases [50]. Ensemble learning may be applied in a variety of medical diagnosis tasks. Overall, it improves performance, makes models more robust, and reduces the chance of overfitting. By combining aspects of several models, deep learning can learn simple and complex patterns efficiently. Five ensemble deep models were used in this Alzheimer's disease detection study to efficiently detect cases of Alzheimer's disease in the multiclass and binary-class classification datasets. The input layers, output shapes, and parameters of the proposed ensemble model are presented in Table 3. The proposed ensemble deep learning model is shown in Figure 2.

Firstly, we imported the VGG-16 and EfficientNet-B2 models from the Keras applications module along with other important libraries relevant to the model. The input image shape for the ensemble model was 224 × 224 × 3. Then, we loaded both pre-trained deep learning models with include_top set to false (without top layers). The input shape for the ensemble models was created and kept the same. After that, we concatenated the outputs of the VGG-16 and EfficientNet-B2 models using the "concatenate" function. A dropout layer was added immediately after the concatenation layer. A flatten layer was then used to convert the features into a format acceptable to the fully connected layers.
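Putting the pieces together, the following is a minimal sketch (not the authors' released code) of the ADASYN balancing step and the two-backbone ensemble described above, assuming Keras and imbalanced-learn. The head and compilation settings mirror the next paragraph, while the dropout rate, dense-layer widths, and frozen backbones are illustrative assumptions; the exact shapes are given in Table 3.

```python
# Sketch of the ADASYN + ensemble pipeline; layer widths, dropout rate, and
# the flatten-then-resample workaround for image data are assumptions.
import numpy as np
from imblearn.over_sampling import ADASYN
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.applications import VGG16, EfficientNetB2

NUM_CLASSES = 4                    # MD, MOD, ND, VMD
INPUT_SHAPE = (224, 224, 3)

def balance_with_adasyn(images, labels):
    # ADASYN expects a 2-D feature matrix, so images are flattened,
    # resampled, and reshaped back to image form.
    flat = images.reshape(len(images), -1)
    flat_res, labels_res = ADASYN(random_state=42).fit_resample(flat, labels)
    return flat_res.reshape(-1, *INPUT_SHAPE), labels_res

def build_ensemble():
    inputs = keras.Input(shape=INPUT_SHAPE)
    # Both backbones are loaded pre-trained and without their top layers.
    vgg = VGG16(include_top=False, weights="imagenet", input_shape=INPUT_SHAPE)
    eff = EfficientNetB2(include_top=False, weights="imagenet",
                         input_shape=INPUT_SHAPE)
    vgg.trainable = False          # assumption: backbones frozen, head trained
    eff.trainable = False
    # For a 224x224 input both backbones yield 7x7 feature maps, so they can
    # be concatenated along the channel axis, then dropout and flatten.
    x = layers.Concatenate()([vgg(inputs), eff(inputs)])
    x = layers.Dropout(0.3)(x)     # rate is an illustrative assumption
    x = layers.Flatten()(x)
    # Head: batch normalizations and dense layers, as elaborated in the next
    # paragraph; the widths below are illustrative, not the paper's values.
    for width in (256, 128):
        x = layers.BatchNormalization()(x)
        x = layers.Dense(width, activation="relu")(x)
    x = layers.BatchNormalization()(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```

Training would then use the balanced arrays with one-hot labels, matching the categorical cross-entropy loss named below.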
We then fine-tuned the other layers to accelerate the training steps and improve overall progress. Four batch normalization layers and three dense layers were used with activation functions. Batch normalization is a very popular method that normalizes layer inputs and provides stability to neural networks; it also makes learning easier and faster, and testing accuracy may be improved with batch normalization, depending on the type of data. Dense layers are regularly used for image classification. Finally, the model was compiled with the categorical cross-entropy loss function and the Adam optimizer.

Fine-Tuned Individual Deep Learning Models

This subsection gives a brief description of the deep learning (DL) models used, namely convolutional neural networks (CNNs), DenseNet121, VGG16, Xception, and EfficientNet-B2. It also analyzes the performance of the trained models using metrics such as accuracy, AUC, recall, precision, and F1 score.

CNN

CNNs are considered the most significant DL models. Unlike traditional matrix multiplication, CNNs employ convolution in their operation. Their primary application is in object classification using image data, and they are widely used for image and video processing tasks. The structure and function of the visual cortex in the brain inspired these networks. A CNN's operation involves several processing layers, including convolutional layers, pooling layers, and fully connected layers. Overall, CNNs are powerful tools that have been used for various applications, including object detection, facial recognition, and autonomous driving [51,52]. The CNN architecture is shown in Figure 3. It took an input size of 224 × 224 × 3. The architecture had three two-dimensional convolutional layers, each followed by the ReLU activation function, three max pooling layers, and three batch normalization layers. A flattening layer was then added, followed by a dropout layer. Two dense layers were included, one with ReLU activation and the other with softmax activation.

DenseNet121

DenseNet121 [53] is a CNN architecture that has been commonly employed for image classification tasks. It was introduced in 2017 as an improvement upon previous popular architectures such as VGG and ResNet. DenseNet121 employs a dense connectivity pattern, where each layer receives feature maps from all previous layers and passes its feature maps to all successive layers. This dense connectivity allows for better gradient flow and parameter efficiency and reduces vanishing gradient problems. The architecture has 121 layers, including convolutional, pooling, and dense blocks, and has achieved state-of-the-art performance on several benchmark datasets such as ImageNet.

EfficientNet-B2

EfficientNet-B2 is a CNN architecture that is part of the EfficientNet family of models. It was designed to provide an optimal balance between model size and performance for image classification tasks. EfficientNet-B2 is larger and more complex than the original EfficientNet-B0 model, but it maintains the same basic structure, including the use of compound scaling to balance depth, width, and resolution. EfficientNet-B2 has 7.8 million parameters. It is often used as a baseline model for transfer learning or fine-tuning on specific image classification tasks [54].

VGG16

VGG-16 is a deep CNN architecture that was developed by the Visual Geometry Group (VGG) at the University of Oxford in 2014.
It is a widely used model for image recognition tasks and has achieved state-of-the-art results in many computer vision (CV) benchmarks. The architecture of VGG16 contains 16 layers, including 13 convolutional layers and 3 fully connected layers. The convolutional layers have small 3 × 3 filters and are stacked on top of each other, increasing the depth of the network. The use of small filters with a small stride helps preserve spatial information and enables the network to learn more complex features [55].

Xception

Xception is a deep CNN architecture that was proposed in 2016. It was inspired by the Inception architecture but differs from it by replacing the standard convolutional layers with depth-wise separable convolutions. This approach minimizes the number of training parameters and computations, resulting in faster and more efficient training. Xception also employs skip connections to allow for better gradient flow and improved accuracy. The architecture has achieved state-of-the-art results on various image classification benchmarks such as ImageNet, and it has been widely used in computer vision applications [56].

Performance Measures

Evaluation metrics are quantitative measures used to assess the performance of a model or system in solving a specific task. A model's classification results can be divided into four classes: true-positive (TP), true-negative (TN), false-positive (FP), and false-negative (FN). TP refers to correctly identified positive instances, while TN refers to accurately identified negative instances. FP represents falsely predicted positive instances, and FN represents falsely predicted negative instances. Various evaluation parameters were utilized in this study, including recall, precision, accuracy, AUC, and F1 score.

Results and Discussion

Experiments were conducted on a sixth-generation Hewlett Packard Core i5 machine with 25 GB of RAM and a Colab Pro GPU provided by Google. This section presents all the experiments conducted on the binary-class and multiclass Alzheimer's brain disease datasets. We utilized efficient ensemble deep learning architectures that consumed minimal resources. We used a batch size of 32, 15 epochs, a learning rate of 0.0001, a cross-entropy loss function, and the Adam and SGD optimizers.

Results of Individual Fine-Tuned Deep Learning Models

Experiments were conducted using the individual fine-tuned deep learning models VGG-16, DenseNet-121, EfficientNet-B2, CNN, and Xception. These individual models were trained and tested using the categorical cross-entropy loss function for the mild demented, moderate demented, non-demented, and very mild demented cases and the Adam optimizer to optimize performance. A batch normalization layer was added to EfficientNet-B2, Xception, and VGG-16 to speed up the training process, reduce the learning time, and lower generalization errors. Moreover, a dropout layer was utilized to avoid overfitting. Fifty epochs were implemented for each model. Table 4 presents the results of the individual pre-trained models. Among the individual models, DenseNet-121 attained the lowest accuracy, precision, recall, F1 score, and area under the curve for Alzheimer's disease multiclass classification. The second most poorly performing deep model was Xception, which achieved a 75.04% accuracy and 93.70% area under the curve. The CNN and VGG-16 models achieved almost the same classification accuracy.
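As a brief aside, the metrics named in the Performance Measures subsection can be computed directly from model predictions. The snippet below is a sketch using scikit-learn as an assumed tool choice, with placeholder arrays rather than study data.

```python
# Sketch: computing the paper's evaluation metrics with scikit-learn.
# y_true and y_prob are small placeholder arrays, not study data.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

rng = np.random.default_rng(0)
y_true = np.array([0, 1, 2, 3, 1, 0, 2, 3])     # hypothetical labels
y_prob = rng.random((8, 4))
y_prob /= y_prob.sum(axis=1, keepdims=True)     # fake softmax outputs
y_pred = y_prob.argmax(axis=1)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro",
                                    zero_division=0))
print("recall   :", recall_score(y_true, y_pred, average="macro",
                                 zero_division=0))
print("F1 score :", f1_score(y_true, y_pred, average="macro",
                             zero_division=0))
print("AUC      :", roc_auc_score(y_true, y_prob, multi_class="ovr"))
```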
The fine-tuned high-performance model EfficientNet-B2 achieved a 95.89% accuracy and a 95.95% recall score, performing the best among the individual deep learning models. Figure 4 shows the performance comparison of the individual models using various metrics. DenseNet-121 and Xception performed poorly in terms of recall score and F1 score. EfficientNet-B2 performed exceptionally well, as did VGG-16. For all models, the area under the curve (AUC) was higher than the other metrics.

Results of Ensemble Deep Learning Models with Multiclass Dataset

The ensemble deep learning model results are presented in Table 5. The EfficientNet-B2+DenseNet-121 ensemble model achieved a 96.96% accuracy, 97% precision, 96.98% recall, 96.93% F1 score, and 99.60% area-under-the-curve (AUC) score. The second ensemble, VGG-16+DenseNet-121, achieved a 95.56% accuracy and 98.75% AUC. The EfficientNet-B2+Xception model achieved a 96.26% accuracy, 96.50% recall, and 99.11% AUC. Xception+DenseNet-121 achieved a 91.05% accuracy. The proposed VGG-16+EfficientNet-B2 model achieved a 97.35% accuracy score and a 99.64% area under the curve (AUC). All the ensemble models performed well and accurately detected the AD cases in the multiclass dataset. The DenseNet-121+Xception ensemble model achieved an 18% higher accuracy than the individual DenseNet-121 and Xception models, and the proposed VGG-16+EfficientNet-B2 ensemble achieved 1.46% better results than the individual EfficientNet-B2 model. The performance comparison of the ensemble models is presented in Figure 5. Among the ensemble models, VGG-16+EfficientNet-B2 performed efficiently, with high performance metrics. The Xception model combined with EfficientNet-B2 provided better results than the individual Xception model. Similarly, DenseNet-121 with VGG-16 performed with high accuracy in detecting Alzheimer's disease. The experiments proved that the ensemble models provided excellent results compared to the individual models in terms of all performance metrics.

The results of the ensemble deep learning models using the imbalanced dataset are shown in Table 6. The ensemble model of EfficientNet-B2 and DenseNet-121 obtained an accuracy of 92.82%, a precision of 94.29%, a recall of 93.76%, an F1 score of 91.52%, and an area-under-the-curve (AUC) score of 99.38%. The second ensemble model, VGG-16+DenseNet-121, had an accuracy of 91.52% and an AUC of 98.98%. The EfficientNet-B2+Xception model had an accuracy of 90.45%, a recall of 87.80%, and an AUC of 98.80%. Xception+DenseNet-121 obtained an accuracy of 89.29%. The proposed VGG-16+EfficientNet-B2 model obtained an accuracy score of 95% and an AUC of 99.41%. The ensemble models still identified AD cases in the multiclass dataset well; however, using the imbalanced dataset, the DenseNet-121+Xception ensemble model achieved an 8% lower accuracy, and the accuracy of another ensemble model was 7% lower when compared to the balanced dataset. Figure 6 displays the performance comparison of the ensemble models using the imbalanced dataset. Among the ensemble models, VGG-16+EfficientNet-B2 performed effectively, with high performance metrics. In comparison to the individual models, the DenseNet-121 model with EfficientNet-B2 offered superior results. In the same way, DenseNet-121 with VGG-16 showed good performance in identifying Alzheimer's disease. The results showed that the ensemble models produced reasonable results even with the unbalanced dataset.
The experiments, however, showed that the proposed approach achieved a 2.35% higher accuracy when utilizing the balanced dataset. Table 7 presents the results of the proposed model with different learning rates to check their impact on model performance. During the training phase, it was essential to select an appropriate learning rate to ensure that the model weights were properly updated. We achieved a 94.47% accuracy and 98.53% AUC utilizing a 0.01 learning rate. In another experiment, the learning rate was set to 0.001, and a 97.30% accuracy was achieved. When the learning rate was set to 0.0001, we attained a model accuracy of 97.35% and a 99.64% AUC.

The confusion matrix results of the ensemble deep learning models are shown in Figure 7, where label 0 indicates moderate demented, label 1 indicates non-demented, label 2 indicates mild demented, and label 3 indicates very mild demented. The VGG-16+EfficientNet-B2 model produced 100% true predictions for non-demented cases. The Xception+DenseNet-121 model produced 98% true predictions for non-demented and mild demented Alzheimer's cases. The Xception+EfficientNet-B2 model likewise produced 100% true predictions for non-demented cases. The VGG-16+DenseNet-121 model achieved 91% true predictions for the moderate demented class. The results hence showed that the VGG-16+EfficientNet-B2 model's predictions were very good. The training and testing accuracy and loss are displayed in Figure 8a. We observed that the training accuracy was 81.34% at epoch 1, and by epoch 10, we started to see variations. We chose to train the ensemble deep learning models for 50 epochs, and we were able to improve their performance. Figure 8b shows the performance curves of the ensemble EfficientNet-B2+DenseNet-121 model, where the training accuracy was at its highest point at epoch 45. Figure 8d,e shows that the testing loss for the Xception+DenseNet-121 and Xception+EfficientNet-B2 ensembles was high compared to that in Figure 8a,b.

Results of Ensemble Deep Learning Models with Binary-Class Dataset

The ensemble models were also evaluated on the binary-class Alzheimer's disease dataset to test the effectiveness of the proposed model, as shown in Table 8. The EfficientNet-B2+DenseNet-121 model achieved a 95.45% accuracy, 95.10% precision, 95.45% recall, 95.50% F1 score, and 98.68% area-under-the-curve (AUC) score. The second ensemble, VGG-16+DenseNet-121, achieved a 94.90% accuracy and 98.43% AUC. The EfficientNet-B2+Xception model achieved a 91.80% accuracy, 91.80% recall, and 97.34% AUC. The Xception+DenseNet-121 model achieved a 91.05% accuracy. The proposed VGG-16+EfficientNet-B2 model achieved a 97.07% accuracy score and 99.59% area under the curve (AUC). All the ensemble models performed outstandingly and accurately detected the AD cases in the binary-class dataset, with the proposed ensemble model achieving a remarkable 97.07% accuracy.

K-Fold Cross-Validation Results for Ensemble Models

The performance and feasibility of the proposed ensemble model were also evaluated with k-fold cross-validation. The results of the cross-validation are displayed in Table 9. The experiments validated that the performance with k-fold cross-validation was also outstanding. The VGG-16+DenseNet-121 model achieved an accuracy score of 0.942 with a ±0.02 standard deviation.
EfficientNet-B2+DenseNet-121 achieved an accuracy score of 0.961 with a ±0.04 standard deviation. VGG-16+EfficientNet-B2 achieved a 0.963 accuracy with a ±0.03 standard deviation. The results suggested that the proposed ensemble model was fit and accurate enough to detect Alzheimer's disease in the multiclass MRI image dataset.

Comparison of Proposed Ensemble Model with Previous Studies

To show the effectiveness and robustness of the proposed ensemble model, we performed a comparison of the proposed method with the previous studies discussed in the related work. Table 10 presents the results comparison for the detection of Alzheimer's disease cases. We chose those studies from the literature that considered multiclass datasets for the comparison with the proposed method. Jain et al. [39] proposed convolutional neural networks for AD classification using multiclass images, with 95.73% accuracy. Similarly, another researcher [42] used the CNN-based transfer learning architecture VGG-16 to classify Alzheimer's disease and achieved 95.70% accuracy. Yildirim et al. [23] employed hybrid deep CNN models using a multiclass Alzheimer's dataset and attained 90% accuracy. Liu et al. [22] utilized a multi-deep CNN for automatic Alzheimer's disease classification, with the lowest accuracy. The results shown in the comparison table were not satisfactory due to the low accuracy and the fact that the models were not utilized properly to achieve outstanding results. However, our proposed ensemble model classified Alzheimer's disease with the highest accuracy and was more efficient than any other individual or previously pre-trained model.

Conclusions

The timely diagnosis and classification of Alzheimer's disease using multiclass datasets is a difficult task. To detect and treat the disease, an accurate automatic system is required. This study proposed a deep ensemble model with transfer learning techniques to detect Alzheimer's disease cases in a multiclass dataset. The Alzheimer's disease dataset was highly imbalanced, and we used adaptive synthetic oversampling (ADASYN) to balance the classes. The proposed model achieved an accuracy of 97.35% in detecting disease cases. The DenseNet-121+Xception ensemble model achieved an 18% higher accuracy than the individual DenseNet-121 and Xception models, and the proposed ensemble model achieved 1.46% better results than the individual EfficientNet-B2 model. Our proposed ensemble model was less time-consuming, more efficient, worked well even on small datasets, and did not use any hand-crafted features. The deep learning models automatically extracted relevant and key features from the samples, and the ensemble of deep learning models captured various aspects of the given samples in depth. In the future, we will collect and evaluate larger amounts of data to quickly and precisely diagnose Alzheimer's cases and will combine various types of data to enhance the accuracy of the detection models.
FVC, FEV1, FEV1/FVC Ratio and FEF25-75% in End-stage Renal Disease (ESRD) Patients Undergoing Maintenance Haemodialysis

Background: End-stage renal disease causes multiple pulmonary complications, and lung functions are decreased in ESRD patients undergoing maintenance haemodialysis.

Objectives: To observe FVC, FEV1, FEV1/FVC ratio and FEF25-75% in ESRD patients undergoing maintenance haemodialysis to evaluate their lung function status.

Methods: This cross sectional study was carried out in the Department of Physiology, BSMMU, Dhaka, from July 2011 to June 2012. For this, 30 ESRD patients aged 25-55 years undergoing maintenance haemodialysis for less than 1 year were studied, and 30 age- and sex-matched healthy subjects were taken as controls. Patients were selected from the Nephrology Department of BSMMU, Dhaka. FVC, FEV1, FEV1/FVC ratio and FEF25-75% were measured by a digital spirometer. For statistical analysis, the independent sample 't' test and one-way ANOVA test were performed as applicable.

Results: The mean percentages of predicted values of FVC, FEV1 and FEF25-75% were significantly lower in patients, except the FEV1/FVC ratio, which was almost similar to that of controls. 63.33% of patients had restrictive features and 36.67% of patients had both restrictive and obstructive (small airway obstruction) features.

Conclusion: This study concluded that some pulmonary functions were markedly reduced in ESRD patients undergoing maintenance haemodialysis. In addition, most of the patients were suffering from restrictive disorders, and some of them were affected by both obstructive and restrictive types of pulmonary disorders.

End-stage renal disease (ESRD) is an irreversible decline in kidney function, which is severe enough to be fatal in the absence of dialysis or transplantation 1. In Bangladesh, the prevalence of ESRD is 150-200 per million population 2. The mean age of ESRD patients is 42 years, which is similar to those of India and Pakistan but much less than that of the developed countries, where the mean age is 61 years and most of the patients die due to lack of renal replacement therapy 3. The commonest cause of ESRD is glomerulonephritis (40%), with diabetes mellitus (24%) and hypertension (13%) being the other leading causes 4. Additional risk factors for developing ESRD include a history of heroin abuse, tobacco, analgesic use, ethnicity, obesity, lower socioeconomic status, hyperuricaemia and family history of kidney disease 5. Clinical features of ESRD include hypertension, anemia, anorexia, nausea, vomiting, autonomic neuropathy, myopathy, generalized weakness, metabolic bone disease, metabolic acidosis, and fluid and electrolyte imbalance 6.

Pulmonary complications such as pulmonary edema and pleural effusion are common in ESRD 7. Others are pulmonary fibrosis and calcification, pulmonary hypertension, hemosiderosis and pleural fibrosis 7,8. Haemodialysis-associated hypoxemia, together with a fall in white cell count, is a response to intrapulmonary sequestration of neutrophils, which causes partial obstruction of capillary blood flow in the lung and may induce acute changes in lung function 9.
Pulmonary edema occurs due to hypoalbuminemia, which decreases plasma oncotic pressure, and increased pulmonary capillary hydrostatic pressure; these lead to alterations of pulmonary intravascular Starling forces and increase pulmonary capillary membrane permeability, allowing the efflux of protein-rich fluid from the capillary into the lung 10. Metastatic pulmonary calcification occurs in 50% or more of renal patients with metastatic soft tissue calcification 11. This lesion predominantly affects alveolar septa with varying degrees of fibrosis, calcification and macrophagic giant cells, and causes septal thickening 11,12.

In fact, some authors have reported that 75% of patients on long-term haemodialysis had a history of restrictive pulmonary abnormalities 13. Several investigators in different countries have reported lower spirometric lung function variables in haemodialysis patients 14-17. Of these variables, FVC, FEV1, FEV1/FVC ratio and FEF25-75% were the most commonly evaluated.

In Bangladesh, assessment of pulmonary functions among ESRD patients undergoing maintenance haemodialysis has not yet been done. Though other studies found restrictive features in ESRD patients, no studies reported obstructive features in this group of patients. Therefore, the present study was undertaken to assess some aspects of pulmonary function status in ESRD patients undergoing maintenance haemodialysis and also to detect the presence of pulmonary function disorders.

Methods

This cross sectional study was carried out in the Department of Physiology, BSMMU, Dhaka, from July 2011 to June 2012. Thirty male ESRD patients aged 25 to 55 years undergoing maintenance haemodialysis for a duration of less than 1 year were taken as the study group. Thirty age- and sex-matched healthy subjects were taken as controls. The study protocol was approved by the Ethical Review Committee of BSMMU, Shahbag, Dhaka. Patients were randomly selected from the Nephrology Department of BSMMU, Dhaka. Subjects with a history of acute or chronic lung and chest wall disease (e.g. pneumonia, tuberculosis, COPD, malignancy), history of coronary heart disease, diabetes mellitus, alcohol/tobacco users and smokers were excluded from the study. After selection, the objectives and benefits of the study were explained to each subject, and voluntary participation was encouraged. When they agreed to participate, informed written consent was taken from each subject. Detailed personal, medical, family, socioeconomic, occupational and drug histories were recorded in a preformed questionnaire, and thorough physical examinations were done and documented. For the assessment of lung function, FVC, FEV1, FEV1/FVC ratio and FEF25-75% of all the subjects were recorded by a digital spirometer. Data were expressed as mean percentage of predicted value ± SD and also as percentage frequency. The independent sample 't' test was done to compare between the groups using SPSS for Windows version 12.0 as applicable. A P value <0.05 was accepted as the level of significance.

Results

The mean percentages of predicted values of FVC, FEV1 and FEF25-75% were significantly lower (P<0.001) in the study group than those of controls. Again, the mean percentage of predicted value of FEV1/FVC% was higher in the study group in comparison to that of controls, but the difference was statistically non-significant (Table I).
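For context, the following sketch shows how spirometric results of the kind reported in Table I are conventionally categorized into the restrictive and obstructive patterns discussed next. The cutoffs are commonly cited conventions assumed here for illustration; the paper does not state the exact thresholds it applied, and it infers small airway obstruction from a reduced FEF25-75% despite a preserved FEV1/FVC ratio.

```python
# Illustrative classification of spirometric patterns; all cutoffs are
# assumed conventional values, not thresholds stated in this paper.
def classify_spirometry(fvc_pct_pred, fev1_fvc_pct, fef2575_pct_pred):
    restrictive = fvc_pct_pred < 80        # FVC below 80% of predicted
    obstructive = fev1_fvc_pct < 70        # reduced FEV1/FVC ratio
    small_airway = fef2575_pct_pred < 65   # reduced FEF25-75%, ratio preserved
    if restrictive and (obstructive or small_airway):
        return "combined restrictive and obstructive"
    if restrictive:
        return "restrictive"
    if obstructive or small_airway:
        return "obstructive"
    return "normal"

# Example: low FVC with a preserved ratio but reduced FEF25-75%
print(classify_spirometry(65, 82, 55))  # -> combined restrictive and obstructive
```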
Among the ESRD patients, 36.67% had both restrictive and obstructive types of lung dysfunction, and 63.33% showed features of only the restrictive type (Figure I).

Discussion

In this study, the mean percentages of predicted values of FVC, FEV1 and FEF25-75% in ESRD patients undergoing maintenance haemodialysis were significantly lower than those of apparently healthy subjects. Similar findings were also reported by investigators in other countries 14-17. But the FEV1/FVC ratio was found to be higher in this group of ESRD patients than in healthy subjects, and the difference between the groups was statistically non-significant. On the contrary, lower values of this ratio were reported by another researcher 15. In this study, all the patients had pulmonary functional abnormality. In addition, a restrictive type of lung function disease was found in 63.33%, and both restrictive and obstructive patterns were found in 36.67% of these ESRD patients. But Bush and Gabriel (1991) found that 20% of patients under haemodialysis in their study had normal lung function, 30% had restrictive and 20% had both restrictive and obstructive features 10. Karacan et al. found that 59.26% of patients had normal lung function, 18.52% had restrictive and 7.4% had both restrictive and obstructive dysfunction 17.

The literature proposes different mechanisms linking pulmonary dysfunction to renal failure. It has been suggested that uremia exerts toxic effects on the vascular endothelium, leading to inflammation of pulmonary capillaries, which may enhance their permeability. So there may be efflux of protein-rich fluid from the capillaries, leading to pulmonary edema. This effect causes increased resistance in the small airways and alveoli, which causes narrowing of small airways 18.

Research evidence suggests that restrictive and obstructive lung disease in these patients may be attributed to pulmonary hypertension, pulmonary edema, pulmonary fibrosis and calcification, pleural effusion and pleural fibrosis 12,18,19.

From the nature of the present study, the exact mechanism of pulmonary involvement in ESRD patients cannot be elucidated. However, the above-mentioned factors may have a role in the pulmonary involvement in this group of patients.

This study also revealed that patients under haemodialysis may present features of pulmonary dysfunction, showing intensive lung involvement in end-stage renal failure. The magnitude of pulmonary abnormality in this study is quite high compared to other similar studies 10,17. Furthermore, a restrictive type of dysfunction is evident in the majority of patients. This is supported by the history of moderate-degree dyspnea affecting most of the patients.

Conclusion

The results of the study concluded that spirometric lung function variables were decreased in ESRD patients undergoing maintenance haemodialysis, and these patients may suffer from either restrictive or both restrictive and obstructive pulmonary disorders.
Histopathological Changes in the Myocardium Caused by Energy Drinks and Alcohol in the Mid-term and Their Effects on Skeletal Muscle Following Ischemia-reperfusion in a Rat Model

Background: Although energy drinks have been consumed for many years, their effects on the cardiovascular system continue to be investigated. Today, the most frequent setting for energy drink consumption is the entertainment sector, and this study investigates the effects of energy drink and alcohol consumption on rats' limb and myocardium tissue.

Methods: Forty Wistar Albino rats were used and divided into 4 groups. Energy drinks were given to the first group (the energy drink group), alcohol was given to the second group, and energy drinks and alcohol were given to the third group, the Red Bull-Alcohol (RA) group. Blood samples, leg muscles, and heart tissues were studied after an ischemia-reperfusion model was created at the infrarenal level.

Results: In the histopathological examination of heart muscles, the damage was significantly more severe in the RA group than in the control group (P < .05). There was no significant change in the RA group in the limb muscle; however, muscle fiber abnormality was higher. The energy drink group was more prone to carbon dioxide retention and hypoxia, resulting in respiratory acidosis (P = .05). Lactate was significantly higher in the energy drink group (P = .002). Glucose concentrations of the energy drink and RA groups were higher (P = .02).

Conclusion: The high lactate values of the energy drink group and the more damaged fibers in the striated muscles of the RA group showed that they are more susceptible to ischemia. Long-term energy drink and alcohol use may cause damage to the heart muscle and endothelium. Also, the effects of long-term alcohol and energy drink use on the respiratory system should be investigated with more specific studies.

INTRODUCTION

Energy drinks (EDs) have been produced since 1949, and their use became widespread after the 1980s in western countries. 1,2 Energy drinks entered the Turkish market in the 1990s and are consumed worldwide and in our country without any restrictions. Energy drinks have been consumed widely for almost 40 years, and much research has been published on human clinical trials and animal models. Although they were introduced to the market as a fluid and energy supplier for sports activities and a concentration booster, today their most frequent area of use is the entertainment industry. Because they suppress alcohol intoxication and hangover symptoms, they allow more alcohol to be consumed, and their use for this purpose continues to increase and become widespread. High sugar, caffeine, taurine, and sodium content and increased alcohol intake may cause cardiac, gastrointestinal, and neurological symptoms. 3 The service of these beverages has been questioned worldwide, since patients present to clinics with a broad spectrum of such clinical scenarios after consuming them. Based on sporadic case experiences, experiments on animal models have been conducted, and the adverse effects of these beverages on various organ systems, when used alone or in combination with alcohol, have been revealed. 4-8 Since the early 1980s, many animal experiments have been carried out using these beverages.
It has been shown in these experiments that long-term use of EDs reduces the resistance of tissues to ischemia-reperfusion damage, lowers the epilepsy threshold, causes insulin resistance due to the high amount of sugar they contain, and increases alcohol-related side effects as it increases alcohol consumption. 3,4,6,9 The aim of this study was to show the histopathological changes in the rat myocardium and abdominal aorta caused by the mid-term use of EDs alone or in combination with alcohol and to compare the histopathological changes at the tissue level using the established ischemia-reperfusion model.

METHODS

Ethics committee approval of the study was received on November 4, 2019, from the Animal Laboratory Ethics Committee of the same center, with decision number 2019/31. Wistar Albino rats bred in this center, with an average weight of 200-300 g, were randomly selected. The animals were kept in cages in 4 groups under standard room conditions. Forty rats were included in the study, and they were given free access to water and fed with pellet feed.

Animal Selection

Forty rats were randomly selected and divided into 4 groups for the study. About 1.5 mL/100 g of ED (Red Bull) was given to one group, a 0.486 mg/100 g dose of vodka-type alcohol to one group, and both ED and vodka-type alcohol to another group at the same doses for 1 month. The fourth group was assigned as the control group. The drinks were added to the water consumed by the rats daily. Gavage was not used to avoid stress. We preferred the Red Bull brand as the ED because it is the most well-known and consumed brand in the market. We selected vodka-type alcohol, the locally produced Binboa brand, as it is also widely drunk. We preferred vodka because it is the type of alcohol most frequently consumed along with EDs. This protocol was conducted based on previous research. 10,11

Pre-experiment Preparation

The feeding of the rats was stopped 12 hours before the experiment. During these 12 hours, water consumption was not restricted. Ketamine at a dose of 60 mg/kg and xylazine at a dose of 5 mg/kg were used as anesthetics. Anesthesia was continued with additional doses if needed. After anesthesia, the animals' thorax, abdomen, and left legs were shaved. Animals were sacrificed after the experiment.

Surgical Procedure

After anesthesia and shaving, laparotomy was performed, and the peritoneum was opened. The intestines were retracted, and the abdominal aorta was found. An atraumatic bulldog clamp was placed on the abdominal aorta at the infrarenal level. In order to reduce the loss of fluid from the abdomen during the ischemia-reperfusion periods, the skin was closed with a single temporary silk suture, and the surgical site was covered with hot wet gauze. After 20 minutes of ischemia, the suture was opened, the bulldog clamp was removed, and the skin was approximated again during the 20-minute perfusion period. During the waiting period of 40 minutes, the animals were kept in room air and at room temperature. After 40 minutes, phlebectomy and cardiotomy were performed after sternotomy under deep anesthesia. There are many varieties of ischemia-reperfusion models. 12-16 We chose infrarenal occlusion to reduce the side effects of ischemia on the kidneys and liver. The ischemia-reperfusion period was limited to 40 minutes to avoid respiratory side effects, because we kept the animals in room air without ventilation. About 1 mL of blood taken by phlebectomy was separated, and blood gas analysis was performed immediately.
The remaining part was transferred to a biochemistry tube and studied in the biochemistry laboratory. After cardiotomy, the abdominal aorta was removed, including the segments above and below the clamped section. Simultaneously, a sample was taken from the gastrocnemius muscle of the left leg. Tissue samples were stored in formaldehyde solution at +4°C in the refrigerator.

Examination of Blood Samples

First of all, blood samples were studied with a Siemens RAPIDPoint 500 model blood gas device. The pH, partial pressure of carbon dioxide (PaCO2), partial pressure of oxygen (PaO2), peripheral oxygen saturation (SpO2), lactate, hemoglobin, calcium, sodium, and base excess parameters were studied. The biochemistry parameters of troponin, aspartate aminotransferase (AST), alanine aminotransferase (ALT), glucose, and the cholesterol panel [total cholesterol, high-density lipoprotein (HDL), low-density lipoprotein (LDL), and triglycerides] were checked. Glucose concentration was measured by the Somogyi-Nelson colorimetric method, 17,18 and cholesterol levels were measured by the ferric chloride method. 19 Aspartate aminotransferase and ALT levels were measured by the Reitman and Frankel 20 photocolorimetric method. Troponin levels were measured by the sandwich immunoassay method.

HIGHLIGHTS
• Energy drinks (EDs) and alcohol may impair gas exchange.
• Energy drinks and alcohol may cause hypertriglyceridemia and glycemia.
• Long-term use of EDs and alcohol together showed significant histopathological changes in the myocardium and striated muscle following ischemia-reperfusion.

Histological Preparation

Tissues fixed with formaldehyde were kept in cassettes for 1 night at +4°C. They were then washed with tap water for 6 hours. After washing, the tissues were dehydrated with an ascending alcohol series. Tissues were first incubated with 70% alcohol for 1 night and then with 80% (1 hour), 90% (1 hour), 96% (30 minutes + 30 minutes), and 100% (30 minutes + 30 minutes) alcohol. Then they were kept in toluene (15 minutes + 15 minutes) for clearing. Tissues were held in a toluene/paraffin (50/50%) mixture for 45 minutes. Then, the tissues, taken into pure paraffin, were kept in an oven at 60°C for 2 hours and blocked by embedding in hard paraffin. After trimming the paraffin blocks, they were cut at 4.5 µm thickness with a Leica SM 2010R microtome. The preparations were kept in an oven at 60°C overnight.

Hematoxylin and Eosin Staining

The preparations were deparaffinized with toluene. Then, rehydration was achieved by passing through a series of decreasing alcohol concentrations (100%, 100%, 100%, 96%, 90%, 80%, 80%, and 70%), and finally the preparations were washed with distilled water. The preparations were kept in hematoxylin for 15 minutes and washed in tap water for 15 minutes. Afterward, the preparations, differentiated with acid alcohol (1 second), were again kept in tap water for 10 minutes. The preparations, kept in lithium carbonate for 1 minute, were first rinsed in tap water and then in distilled water. Preparations kept in eosin for 1.5 minutes were washed with distilled water. They were passed through an ascending alcohol series for dehydration (70%, 80%, 90%, 96%, 96%, 100%, 100%, and 100%). The preparations were completely removed from the water by soaking in toluene for 30 minutes and were mounted with Entellan. Stained slides were evaluated with an Olympus BX61.
Histological damage was scored for disorganization, degeneration of the muscle fibers, inflammatory cell infiltration, and vasocongestion (0: normal, 1: mild, 2: moderate, 3: severe) (minimum score: 0, maximum score: 12). 21

Statistical Analysis

The distribution of variables was classified, and descriptive results were obtained using the SPSS version 23 (Statistical Package for the Social Sciences for Windows) program. The normality of data was analyzed using the Shapiro-Wilk and Kolmogorov-Smirnov tests. Continuous variables are presented as median with interquartile range (IR). Nonparametric tests were used for intergroup comparisons, since the number of animals was low. Since the number of groups was 4, the groups were compared using the nonparametric Dunn's multiple comparison test. The nonparametric Mann-Whitney U-test was used for 2-group analyses. A statistically significant difference was accepted at a P-value of <.05.

RESULTS

Blood Gas and Biochemistry

Blood gas values studied from intracardiac samples taken after the 20 minutes of ischemia and 20 minutes of reperfusion are shown as median with IR. There was a statistically significant difference between groups in LDL (P = .004), but there was no statistically significant difference between groups in triglyceride values (P = .08). While the lowest HDL and total cholesterol values were in the RA group, the highest values were in the ED group (P = .001 and .004, respectively). While the highest troponin value was in the control group, the lowest troponin level was in the RA group (P = .04).

Histological Examination

No abnormal findings were observed in the microscopic examination of the control group's myocardial tissues. In the alcohol group, cardiac muscle cells were observed in normal morphology, except for inflammatory cells in the connective tissue. In contrast, cardiac muscle cells with damaged cytoplasm were observed in some areas in the ED group. Eosinophilic heart muscle fibers were also rarely observed in the ED group. In the RA group, many inflammatory cell infiltrations and a large number of damaged cardiac muscle cells were observed (Figure 1). Abnormal morphology was observed in the vascular endothelium in some parts of the heart walls. According to the total histopathological score, the damage was significantly more severe in the RA group than in the control group (P < .05, Table 3). Striated muscle fibers with normal morphology were observed in the control group. Additionally, there were a few damaged fibers in the alcohol and ED groups. In the RA group, damaged muscle fibers were observed in some areas of the striated muscle tissue (Figure 2). According to the total histopathological score, there was no statistically significant change in the RA group compared to the control group; however, muscle fiber abnormality was higher than in the other groups (Table 4).

DISCUSSION

This study aimed to determine the effects of EDs and alcohol on the rat's heart and muscle tissues following an ischemia-reperfusion model. According to the histopathological examination, rats that consumed both ED and alcohol had more damaged cells and abnormal heart and striated muscle tissue structures. Also, all experimental groups were more prone to acidosis, hyperglycemia, and higher triglyceride levels. To demonstrate the metabolic effects of EDs and alcohol on the organism, we analyzed each animal's blood gas and the above-mentioned biochemical parameters.
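To make the group comparisons described in the Statistical Analysis subsection concrete, the sketch below reproduces the same tests with open-source stand-ins (the study itself used SPSS). The scipy and scikit-posthocs calls and the data values are illustrative assumptions, not study data.

```python
# Sketch of the nonparametric comparisons described above: Dunn's multiple
# comparison test across the 4 groups and Mann-Whitney U for 2-group
# contrasts. Values are placeholders; SPSS was the tool actually used.
import pandas as pd
from scipy.stats import mannwhitneyu
import scikit_posthocs as sp

df = pd.DataFrame({
    "group":   ["control"] * 4 + ["ED"] * 4 + ["alcohol"] * 4 + ["RA"] * 4,
    "lactate": [1.1, 0.9, 1.3, 1.0,  2.4, 2.2, 2.6, 2.1,
                1.4, 1.2, 1.6, 1.3,  1.8, 1.7, 2.0, 1.9],
})

# Dunn's test with all pairwise group comparisons
print(sp.posthoc_dunn(df, val_col="lactate", group_col="group"))

# Mann-Whitney U for a single two-group contrast
ed = df.loc[df.group == "ED", "lactate"]
ctrl = df.loc[df.group == "control", "lactate"]
print(mannwhitneyu(ed, ctrl, alternative="two-sided"))
```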
There was no significant difference in pH value between the control, alcohol, and RA groups, but there was a significant difference between the ED group and the other groups (P = .05). The median PaCO2 and PaO2 were similar between the study groups, yet worst in the ED group. However, the respiratory components were better in the control group, and the difference between the control and study groups was statistically significant. In the literature, there are many human case reports and studies describing patients presenting to clinics with lung problems, such as asthma attacks and bronchospasm, after using EDs or beverages containing high sugar and caffeine like EDs. Varraso et al 22 showed that a high-carbohydrate western diet triggered asthma attacks. Wood et al 23 reported that a rapid transition to a high-dose carbohydrate-containing diet adversely affected inflammation in the airways. Since there is no study relating exactly to EDs and the respiratory system, it may be more accurate to perform such an ischemia-reperfusion model while ensuring airway safety by intubation or tracheostomy, so that respiration does not adversely affect the results of the investigation. In addition, the study of the effects of these beverages on the respiratory system will shed light on future studies.

Another striking value among the blood gas parameters was lactate. The median value for lactate in the ED group was 2.39 mmol/L (IR: 2.15), and the P-value between groups was .002. In the double-blind study conducted by Ferreira et al 24 on 14 healthy adult males, the subjects were divided into 4 groups, just as in our study. The subjects underwent an exercise protocol, during which their oxygen consumption, ventilation, and respiratory parameters were recorded by ergospirometry, and their blood lactate levels were measured. Lactate levels (30 minutes after drug ingestion, and 30 and 60 minutes after the effort test) were higher in the alcohol and alcohol + ED sessions compared with the control session. A double-blind experiment conducted by Lara et al 25 with 14 young adult male swimmers separated the groups into placebo and ED groups. They found that blood lactate levels were statistically significantly higher in the ED group after the exercise protocol. 25 The increased lactate value resulting from ischemia-induced anaerobic glycolysis suggests that the ED group was more affected by ischemia.

Energy drinks contain high levels of sodium and calcium, which are essential at the cellular level in ischemia-reperfusion injury; therefore, we studied these 2 electrolytes. Calcium was higher in the control group than in the other groups, but the difference was not statistically significant. We could not find any direct or indirect study on calcium levels related to EDs in the literature. Since calcium is an essential element in myocardial and muscle contractility and in hemostasis as coagulation factor IV, we found the results worth reporting in our study. As expected, the sodium value in the control group was lower than in all the other groups, and the highest median values were in the alcohol and RA groups. For many years, it has been known that diets with high sodium content cause intravascular volume overload and hypertension by increasing the plasma sodium level. 26 The glucose level was higher in the ED and RA groups than in the other groups. Although median glucose values were similar between the alcohol and control groups, they were slightly lower in the alcohol group.
Since alcohol consumption inhibits both gluconeogenesis and glycolysis, it causes hypoglycemia when used alone, while it can cause hyperglycemia when consumed with foods containing high carbohydrates.27 We think that the increased glucose level in all groups, including the control group, was due to increased sympathetic activity triggered by acidosis and the surgical procedures.

Many studies have shown that AST and ALT activities increase at the serum level and decrease at the tissue level in animal groups that receive alcohol, ED, or both. Mihailović et al28 showed that AST and ALT activities increased at the serum level and decreased at the tissue level in their experiment with 8 animals each in the alcohol and control groups. Munteanu et al11 reported similar results with 28 Wistar Albino male rats. They divided the animals into 4 groups, as in our study, and after 30 days of treatment they attached weights equal to 10% of body weight to the rats' tails and subjected them to a challenging swimming protocol. They showed that AST and ALT activities decreased at the tissue level and increased at the serum level at the end of the exercise. Because of the ischemia-reperfusion model we established in our study, our results may not be comparable to these studies.

Energy drinks cause changes in cholesterol levels with long-term use due to their niacin and caffeine content. Many human and animal studies have been published on these substances. Voskoboinik et al29 published a review on the effects of caffeine on the cardiovascular system in 2018. This review emphasized that short-term consumption of unfiltered coffee increased serum triglyceride, cholesterol, and LDL levels. A meta-analysis of 12 randomized trials, including 1017 people who consumed coffee for an average of 45 days, reported that total cholesterol, triglyceride, and LDL levels increased significantly, while HDL levels did not change significantly.30 These effects were primarily seen in people who consumed more than 6 cups of unfiltered coffee daily and had a poor lipid profile. The effect of alcohol use on the lipid profile varies greatly with the amount and duration of consumption. Wang et al31 reported a study that included 8 animals each in the alcohol and control groups; they showed increased cholesterol, HDL, and LDL levels with medium-term alcohol use. Our study found that the alcohol group had increased total cholesterol, HDL, and triglyceride levels but decreased LDL levels compared to the control group.

Another substance that affects cholesterol and triglyceride levels is the vitamin niacin added to EDs. Niacin can be used alone or in addition to statin therapy to lower total cholesterol, LDL, and triglyceride levels and to increase HDL levels in individuals who have impaired lipid profiles and risk factors for cardiovascular events. Although the Atherothrombosis Intervention in Metabolic Syndrome with Low HDL/High Triglycerides: Impact on Global Health Outcomes (AIM-HIGH) and the Heart Protection Study 2-Treatment of HDL to Reduce the Incidence of Vascular Events (HPS2-THRIVE) studies have shown that niacin does not reduce cardiovascular events, it remains one of the prominent substances in ED marketing today. Munteanu et al11 administered ED and ethanol to rats separately and together and compared these 3 groups with the control group. They found that total cholesterol levels in all groups were statistically significantly lower than in the control group.11 Our study found that the ED group had lower LDL levels and higher cholesterol and HDL levels than the control group.
We also saw that triglyceride levels increased approximately 1.5-fold compared to the control group. In the RA group, triglyceride levels were about 1.7 times higher, and total cholesterol, HDL, and LDL levels were lower, than in the control group.

Munteanu et al11 published a study on ED and alcohol. They found signs of alcoholic cardiomyopathy (poor arrangement of myofibrils and swollen mitochondria with dilated cristae) in the hearts of alcohol-treated rats. They also observed swollen, enlarged mitochondria with dilated cristae and abnormal spaces between myofibrils in the ED group, which were thought to be caused by oxidative damage.11 Mansy et al32 compared 3 groups of animals given ED at 3 different doses with a control group. They found that serum antioxidant enzyme levels were significantly lower in animals given medium and high amounts of ED. Their histopathological examination of the liver and kidney showed congestion and necrosis in the cells and inflammation in the intercellular tissues.32 Reis et al4 divided the animals in their experiment into 6 groups: control, low-dose ED, high-dose ED, ethanol, ethanol plus low-dose ED, and ethanol plus high-dose ED. Liver tissue samples of the rats were examined, and findings of ballooning degeneration and lobular inflammation were categorized and evaluated at the end of the 15-day study. These pathological findings were mostly detected in rats given high-dose ED and high-dose ED with ethanol. In our study, extensive inflammatory cell infiltration and numerous areas of damage were observed in the cardiac muscle cells of animals in the RA group. Abnormal structuring was observed in the vascular endothelium in some parts of the heart walls. In the striated muscle cells, more damaged fibers were observed in the RA group compared to the other groups.

Study Limitations

Although respiratory acidosis was statistically significantly more pronounced in the ED group than in the control group, performing the experiment using tracheal ventilation would have been more accurate. Although we ensured that all rats had finished their daily drinks, measurement of blood ethanol levels would have been more appropriate. Also, electron microscopy would have been more useful in the histopathological examination.

CONCLUSION

In conclusion, the consumption of ED and alcohol may cause respiratory acidosis in rats. The high lactate values of animals given ED suggest that they were more affected by ischemia. The observation of more damaged fibers in the striated muscles of the ED and alcohol group also supports this finding. More damaged cells in the heart muscle, inflammatory cell infiltration in the connective tissue, and abnormal structuring in the vascular endothelium indicate that long-term ED and alcohol use may damage the heart muscle and endothelium.
Comparison of two skin temperature assessment methods after the application of topical revulsive products: Conductive iButton data logger system vs contact-free infrared thermometry

Abstract

Background: Skin temperature assessments comprise conductive and contact-free techniques. Comparisons between conductive data loggers and contact-free thermometry after the application of revulsive products are scarce. This study aimed to compare iButton data loggers with an infrared thermometer after the application of two revulsive products. Secondly, the relation between skin temperature kinetics and the skin's perfusion of microcirculation was investigated.

Materials and methods: Healthy females (n = 25) were randomly allocated to two groups, representing products A and B. Skin temperature was measured with "iButtons" and an infrared pistol at baseline and up to 1 hour after application. The skin's perfusion of microcirculation was monitored with a laser speckle contrast imager.

Results: Baseline "iButton" temperature values were significantly lower compared with infrared pistol values in both groups. After application of the products, skin temperature decreased as recorded with both devices, followed by an increase to baseline values when measured with the pistol. The values obtained by the "iButtons" rose above baseline for both products towards the end of the follow-up period. A moderate correlation was found between the infrared pistol and the "iButton" system for product A, with a weak negative correlation between the skin's perfusion of microcirculation and the temperature devices. For product B, the correlation between the devices was moderate, and that between the skin's perfusion and the temperature devices was weak and positive.

Conclusion: Both devices produced similar kinetics, except at baseline, where they may differ because the metallic loggers had insufficiently adapted to skin temperature. The skin's perfusion of microcirculation could not explain the skin temperature changes.

| INTRODUCTION

Skin temperature measurement techniques comprise conductive thermocouples, thermistors, and telemetry systems as well as contact-free infrared thermometry and imaging.1-4 The measurement becomes even more challenging when sweat or topical products cover the skin surface.5-7 The ingredients of plant-derived revulsive products may induce changes in skin blood flow, affecting skin temperature.8-11 To the best of the authors' knowledge, to date, no study has compared conductive and contact-free skin temperature measurement methods for continuous observation of the physiological changes induced by revulsive products. Therefore, the aim of this study was (a) to compare skin temperature results of the conductive iButton data logger system with the contact-free infrared pistol at each time point from baseline to 60-minutes follow-up, (b) to investigate skin temperature changes within each device and product, and (c) to measure the skin's perfusion of microcirculation to evaluate its relation with skin temperature changes.

| Study design and participants

This study was approved by the Swiss Cantonal Ethical Committee of Zurich, KEK-ZH ID 2016-01541, in accordance with the Declaration of Helsinki (ICH-GCP). Twenty-six young healthy Caucasian female volunteers were recruited. After written informed consent, the participants were checked for inclusion and exclusion criteria. The included females were non-smokers, aged between 18 and 35 years, with healthy skin conditions.
They were randomly allocated to one of the experimental groups (group A treated with product A or group B treated with product B) by drawing lots. The products were applied on pre-defined areas of the lumbar back region. Demographics of the participants and environmental conditions are presented in Tables 1 and 2.

| Interventional products

Axanova hot gel® was chosen as product A and Dolor-X hot gel® as product B. Both products are over-the-counter products in Switzerland. Detailed information on the concentration of the components was not available.

| Measurements

Skin temperature was assessed conductively with a telemetric metallic Thermochron data logger system (iButton DS1922L-F5, Maxim Integrated Products). The "iButtons" were programmed via the corresponding interface on a laptop computer to measure skin temperature at 1-second intervals with the highest achievable resolution of 0.0625°C (11 bit).12 After finishing the measurements, they were reconnected to the interface on the laptop computer for data collection. Further, skin temperature was measured contact-free by a handheld infrared pistol providing a resolution of 0.1°C (Voltcraft IR 800-20D IR Thermometer).13 The pistol emits two separate laser beams and captures the reflected light with a diode; the appropriate measurement distance, perpendicular to the skin surface, was reached once both aiming beams united into one light spot. The digitally displayed temperature value was manually transferred to the data sheet. The skin's perfusion of microcirculation was measured as described elsewhere.14

| Experimental setting

Room temperature and humidity (RH) were monitored by a multimeter (Voltcraft MT52) and kept constant between 22.5 and 23.5°C and 39%-40% RH throughout all measurements. The participants were advised to refrain from drinking any caffeine-containing beverages for at least 24 hours before the start of the experiment, not to shower, not to apply any body lotion, and to avoid any exhaustive exercise prior to the measurements. After arriving at the laboratory, they changed into shorts and undressed their upper body down to the underwear. Afterwards, the participants lay down in a prone position on a therapeutic plinth. One side of the lumbar back was randomly defined as the application area. A 10 × 10 cm investigational area was defined as the region of interest (ROI) and confined with elastic tape strips. The "iButton" was placed in its foreseen place as shown in Figure 1. Afterwards, an acclimatization period of 20 minutes started, during which the participants were advised to avoid any movements. Baseline measurements then started: firstly, the skin's perfusion of microcirculation was assessed, followed by skin temperature with the infrared pistol. After completion of the baseline measurements, the "iButton" was removed and product A or B was applied on the participant's pre-defined lower back region. Exactly 0.5 g (Kern 770 precision scale)15 of the investigational product A or B was applied with circular movements using the one-finger-glove technique.16 The participants and the investigators were blinded to the products. After reaching the maximum absorption capacity of the skin, the "iButton" was re-applied and the post-application measurement (T0) started. Follow-up measurements were conducted at 5-minute intervals up to 40 minutes (T5-T40), then at 10-minute intervals up to 1 hour (T50, T60).
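Since the "iButtons" log at 1-second intervals while the infrared pistol was read only at the discrete time points T0-T60, comparing the devices requires reducing the continuous log to those time points. The Python sketch below illustrates one plausible reduction, averaging over a ±30-second window around each time point; the CSV layout, column names, and window width are assumptions, not details taken from the study.

```python
# Sketch: reduce a 1-second iButton log to the discrete pistol time
# points (hypothetical CSV layout with columns 'time_s' and 'temp_C').
import pandas as pd

log = pd.read_csv("ibutton_log.csv")  # assumed file name

time_points_min = [0, 5, 10, 15, 20, 25, 30, 35, 40, 50, 60]  # T0-T60
window_s = 30  # average over +/- 30 s around each time point (assumption)

reduced = {}
for tp in time_points_min:
    center = tp * 60
    mask = log["time_s"].between(center - window_s, center + window_s)
    reduced[f"T{tp}"] = log.loc[mask, "temp_C"].mean()

print(pd.Series(reduced).round(2))
```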
Two-way Repeated Measures ANOVAs were used to analyse differences over time (baseline, T0, T5, T10, T15, T20, T25, T30, T35, T40, T50, T60) and differences between skin temperature measurement techniques ("iButtons," infrared pistol), respectively, as well as the interaction effect between both. One-way Repeated Measures ANOVAs were used to analyse changes over time within each skin temperature measurement technique, "iButtons" and infrared pistol, respectively (baseline, T0, T5, T10, T15, T20, T25, T30, T35, T40, T50, T60). Pearson's correlations were performed to assess the relationship between the two skin temperature measurement techniques on the one hand, and between the skin temperature measurement techniques and skin perfusion on the other hand, for products A and B, respectively.

| RESULTS

Participants' age, height, weight and BMI, as well as environmental conditions, were comparable in both groups (all P > .05; see Tables 1 and 2).

| DISCUSSION

The aim of this study was (a) to compare skin temperature results of the conductive "iButtons" with the contact-free infrared pistol at each time point. Both methods are regarded as suitable for measuring skin temperature after the application of a revulsive product, since the average values per time point were similar. Nevertheless, the correlation between both techniques for each product indicates a certain degree of random deviation at the individual level. A skin temperature difference between the two measurement methods occurred at baseline with untreated skin conditions.

FIGURE 2: Skin temperature, measured by the conductive iButton data logger system and the contact-free infrared pistol, with the skin's perfusion of microcirculation, for product A over time. Legend: ▬• "iButton," ▬■ infrared pistol, ▬▲ skin's perfusion of microcirculation, AU = arbitrary units, † P < .05 infrared pistol within-group difference compared to baseline, ‡ P < .05 "iButton" within-group difference compared to baseline, § P < .05 skin's perfusion of microcirculation within-group difference compared to baseline, ¶ P < .05 between-device difference "iButton" vs infrared pistol.

FIGURE 3: Skin temperature, measured by the conductive iButton data logger system and the contact-free infrared pistol, with the skin's perfusion of microcirculation, for product B over time. Legend: ▬• "iButton," ▬■ infrared pistol, ▬▲ skin's perfusion of microcirculation, AU = arbitrary units, † P < .05 infrared pistol within-group difference compared to baseline, ‡ P < .05 "iButton" within-group difference compared to baseline, ¶ P < .05 between-device difference "iButton" vs infrared pistol.

A plausible explanation for this result could be that the adaptation time of 20 minutes was insufficient to equalize the metallic shell cover of the "iButtons" with the skin surface.17 During the application time of the products, the "iButtons" were placed on a table at room temperature, which might have allowed them to lose temperature and re-adjust to room conditions. According to Pinnagoda et al,18 thermistors should be stored on an untreated blank skin area. In contrast to the current study, they applied the product without rubbing. Therefore, the perfusion kinetics of our study could be related to the rubbing effect induced by the application. Further, they used a total quantity of product that was 93% higher and of a different composition than in the present study. These two factors might explain their long-lasting increase in skin blood flow up to 50 minutes post-application,11 whereas the findings of our study showed a steady decrease after a (non-significant) peak around 10 minutes.
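To make the correlation analysis described above concrete, the following sketch computes Pearson's r between the two devices across matched time points. The temperature values are hypothetical placeholders; the study's actual per-participant data are not reproduced here.

```python
# Sketch: Pearson correlation between the two devices across matched
# time points (hypothetical mean skin temperatures, deg C).
from scipy.stats import pearsonr

ibutton = [31.2, 29.8, 30.1, 30.6, 31.0, 31.4, 31.7, 31.9, 32.1, 32.3, 32.5]
pistol = [32.0, 29.9, 30.3, 30.8, 31.2, 31.5, 31.6, 31.8, 31.9, 32.0, 32.1]

r, p = pearsonr(ibutton, pistol)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```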
Besides, other studies using menthol gel showed an increase in the skin's perfusion of microcirculation.8,22 Therefore, the composition of the products used and the small quantity of ointment applied in comparison with former studies might explain the differences between the results on the skin's perfusion of microcirculation.

The authors would like to offer some suggestions for upcoming studies. Randomizing the ROIs would allow controlling for possible regional differences in micro-vessel density,23 which may influence skin temperature. Secondly, the authors suggest letting the "iButtons" adapt for longer than 20 minutes and storing them on a neighbouring skin region.18

| CONCLUSION

The conductive iButton data logger system and contact-free infrared thermometry give similar kinetics of skin temperature after the application of revulsive products. Contact-free infrared thermometry might be more suitable than the conductive iButton data logger system in terms of initial adaptation time to skin temperature and disturbances induced by covering the skin.

ACKNOWLEDGEMENTS

Thanks to the "Thim van der Laan" foundation for the financial support.
Effect of Narrow Spectrum Versus Selective Kinase Inhibitors on the Intestinal Proinflammatory Immune Response in Ulcerative Colitis

Article first published online 22 April 2016.

Protein kinases are enzymes that catalyze the transfer of a phosphate group from adenosine triphosphate (ATP) to a serine, threonine, or tyrosine residue in their substrates. The protein kinase inhibitor imatinib was first approved for clinical use in 2001,1 and since then protein kinases have become increasingly attractive therapeutic targets, with over 150 drugs in clinical trials and 28 approved by the Food and Drug Administration.2 Currently, most kinase inhibitors have been approved for use in clinical oncology. However, many of the major classes of immune cell receptors, including the T-cell receptor, B-cell receptor, natural killer-cell receptors, and Fc receptors, depend on protein kinases to transduce intracellular signals,3 highlighting the therapeutic potential of kinase inhibitors to treat inflammatory diseases.

Inflammatory bowel disease (IBD) is an umbrella term primarily used to describe two conditions: Crohn's disease and ulcerative colitis (UC). Both diseases are characterized by recurrent intestinal inflammation and epithelial injury believed to be initiated by an inappropriate response to the gut microbiota.4,5 Interplay between luminal antigens, infiltrating leukocytes, the epithelium, and stromal cells results in excess production of proinflammatory cytokines, such as tumor necrosis factor (TNF)-α, interleukin (IL)-1β, IL-6, and IL-8.6 Cytokine production by these cells is controlled by a diverse array of intracellular kinases. P38α, a member of the P38 mitogen-activated protein kinase (MAPK) family, has been reported to be the most important MAPK isoform in inflammatory cells in the mucosa of patients with IBD.7,8 Being a downstream kinase, P38α is a point of convergence for multiple inflammatory signaling pathways and has therefore garnered much interest as a therapeutic target. In a phase III clinical trial, treatment of patients with active Crohn's disease with BIRB-796, a selective P38 MAPK inhibitor, was associated with a dose-dependent decrease in serum C-reactive protein after 1 week. However, compared to placebo, no clinical efficacy endpoints were reached at any time point in the study.9 One explanation for the lack of efficacy of BIRB-796 may be redundancy in the signaling pathways, overriding the need for P38α in the inflammatory cascade. Potentially, this could be avoided by targeting more upstream signaling nodes, and possibly several different targets simultaneously.10 The Src family kinase (SFK) members Lck and Fyn are among the first upstream kinases to be engaged upon T-cell receptor activation, and spleen tyrosine kinase (Syk) is engaged early in B-cell receptor and Fc receptor signaling.11 Neither SFK nor Syk has been a clinical target for IBD; however, a selective Syk inhibitor, fostamatinib, has been shown to decrease mucosal damage in a mouse acetic acid-induced colitis model,12 and the key role T cells have in IBD pathogenesis indicates merit in targeting SFK members. Furthermore, fostamatinib is a particularly promising treatment for chronic lymphocytic leukemia13; however, it achieved only limited efficacy in a phase III rheumatoid arthritis trial,14 indicating that for inflammatory disorders of more complex pathophysiology, single-target inhibition may not be sufficient for clinical efficacy.
Polypharmacology is emerging as the next paradigm of drug discovery, with the belief that targeting multiple signaling nodes with a single drug may lead to improved efficacy, particularly in multifaceted diseases such as IBD and rheumatoid arthritis.15 Initially, it was believed that a high degree of kinase selectivity would be critical for an acceptable safety profile, with concerns that systemic multikinase inhibition might increase the risk of an adverse safety profile. This has, however, been addressed through the proposal of topical delivery of kinase inhibitors.16,17 For example, pan-Janus kinase inhibitors are in preclinical development as inhaled therapy for severe asthma.18 A class of narrow spectrum kinase inhibitors (NSKI) has been developed which targets P38α, SFK, and Syk. Targeting these kinases addresses the narrow window of activity achieved with truly selective inhibitors and at the same time avoids the potential signaling redundancy observed when targeting only downstream signaling nodes. To et al19 recently described the superior effects of an NSKI, RV1088, over selective kinase inhibitors in rheumatoid arthritis synovial membrane cells. In this study, we investigated how another NSKI, TOP1210, compares to selective kinase inhibitors, both in terms of breadth of activity and pharmacology (efficacy and potency), across different inflammatory cell types, in selected human cell assays and in tissue explants from patients with UC.

Kinase Assays

A commercial 384-well fluorescence resonance energy transfer (FRET)-based kinase assay for Src, P38α, and Syk kinases was used to measure the inhibitory activity of compounds. Compound or vehicle (dimethyl sulfoxide [DMSO], 1% v/v) was incubated with the kinase of interest (P38α, 20 ng/mL; Src, 750 ng/mL; or Syk, 500 ng/mL) for 2 hours. A Z-lyte peptide (Invitrogen, Paisley, United Kingdom), selective for an individual kinase, was added (Ser/Thr 4 peptide for P38α, Tyr 2 peptide for Src and Syk). ATP (10 µM, 200 µM, or 15 µM for P38α, Src, and Syk, respectively) was then added. In addition, inactive MAPK-activated protein kinase (MAPKAP)2 (180 ng/mL) was added to the P38α reaction mix. After a 1-hour incubation, development reagent was added, followed by another 1-hour incubation. The reaction was terminated and read using a fluorescence microplate reader.

Patients and Samples

Peripheral venous blood from healthy volunteers was collected by venepuncture and anticoagulated with ethylenediaminetetraacetic acid (EDTA). Perendoscopic biopsies or surgical specimens were taken from macroscopically inflamed colonic areas of patients with UC. Diagnosis of UC was made according to clinical and histologic criteria and confirmed by endoscopy. Each patient or volunteer who took part in the study was recruited after appropriate local ethics committee approval, and informed consent was obtained in all cases.

LPS Activation of Monocyte-derived Macrophages

CD14+ cells were isolated from human PBMCs by positive selection using magnetic beads. Cells were resuspended in RPMI containing 10% FBS and cultured (37°C, 5% CO2) in the presence of human recombinant granulocyte-macrophage colony-stimulating factor (100 ng/mL) for 12 to 14 days. Cells were harvested, resuspended (2 × 10^5 cells per mL), dispensed into 96-well plates (100 µL/well), and allowed to equilibrate (2 hours). Test compound (0.1-1000 ng/mL) or vehicle (DMSO, 0.5% v/v) was incubated with the cells (2 h) before stimulation with LPS (10 ng/mL) for 24 hours.
Supernatants were collected for IL-8 and TNF-α analysis. In separate experiments, the potential cytotoxic effects of the compounds were assessed by incubation (37°C, 24 h) of compound or vehicle with HT29 cells seeded as above. Viability of cells was assessed by addition of Presto blue and reading fluorescence (560 nm/590 nm) after 10 to 30 minutes. At the chosen concentrations of test compounds used in these studies, no significant decrease in viability was detected (data not shown).

Myofibroblast Isolation and Culture

Intestinal myofibroblasts were isolated from inflamed resected UC mucosa. The mucosa was dissected from the submucosa with a scalpel and cut into 1-mm-square pieces. These were then cultured at 37°C in a humidified CO2 incubator in D-MEM medium supplemented with 20% FBS, 100 U/mL penicillin, 100 µg/mL streptomycin, 50 µg/mL gentamicin, and 1 µg/mL amphotericin. Established colonies of myofibroblasts were seeded into 25 cm2 culture flasks and cultured in DMEM medium supplemented with 20% FBS and antibiotics. At confluence, the cells were passaged using trypsin-EDTA at a 1:2 to 1:3 split ratio. Cells were grown to at least passage 4 before use. Subconfluent monolayers of myofibroblasts were seeded (5 × 10^4 cells/well) into 12-well plates and cultured overnight at 37°C, 5% CO2 before being stimulated with 20 ng/mL recombinant human TNF-α and either vehicle (DMSO, 0.5% v/v) or test compound (TOP1210, 0.000001-1 µg/mL; BIRB-796, 0.01-10 µg/mL; dasatinib, 0.01-10 µg/mL; or BAY-61-3606, 0.01-10 µg/mL). After 24-hour culture, supernatants were collected for measurement of IL-6 and IL-8 by enzyme-linked immunosorbent assay (ELISA). Results were expressed as mean percent inhibition ± SEM compared with TNF-α + DMSO.

Enzyme-linked Immunosorbent Assay

All cytokines were measured using commercial ELISA kits according to the manufacturers' instructions.

Statistical Analysis

Data were analyzed using GraphPad Prism (GraphPad Software, San Diego, CA) using one-way analysis of variance followed by Dunnett's multiple-comparisons test, or the Kruskal-Wallis test followed by Dunn's multiple-comparisons test. A level of P < 0.05 was considered statistically significant.

Inhibition of Key Kinases

The inhibitory activity of TOP1210 and the selective kinase inhibitors was assessed in an ATP-dependent substrate phosphorylation assay (Table 1). The selective kinase inhibitors chosen for this study were BIRB-796 (p38 MAPK inhibitor), dasatinib (SFK inhibitor), and BAY-61-3606 (Syk inhibitor). Inhibition of Src was considered representative of effects on SFK because of the high level of homology within this kinase family. TOP1210 treatment achieved potent, concentration-related inhibition of P38α, Src, and Syk kinase activities, with IC50 values of 65, 10, and 17 nM, respectively. In contrast, BIRB-796 and BAY-61-3606 only inhibited their respective kinase targets. Dasatinib potently inhibited Src kinase activity (IC50, 6 nM) but was also a weak inhibitor of P38α (IC50, 378 nM). TOP1210 potency was comparable to (IC50 within 5-fold), or in the case of BAY-61-3606 greater than, that of the selective kinase inhibitors at their respective target kinases.

Effect of TOP1210 and Selective Kinase Inhibitors on Innate, Adaptive, and Epithelial Cellular Responses

Mucosal inflammation involves the interplay of innate and adaptive immune mechanisms with the epithelium. As a model of innate immunity, PBMCs were stimulated with LPS, leading to IL-8 release (15,658 ± 1500 pg/mL, mean ± SEM).
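The IC50 values reported throughout this section are typically obtained by fitting a sigmoid concentration-inhibition curve. The sketch below fits a four-parameter logistic model to hypothetical percent-inhibition data with scipy; it is an illustrative reconstruction of the calculation, not the curve-fitting software actually used (GraphPad Prism).

```python
# Sketch: estimate an IC50 by fitting a four-parameter logistic curve
# to hypothetical percent-inhibition data (not the study's raw data).
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    # Inhibition rises from 'bottom' at low conc to 'top' at high conc.
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

conc = np.array([0.1, 1.0, 3.0, 10.0, 30.0, 100.0, 300.0, 1000.0])  # nM
inhibition = np.array([2.0, 10.0, 25.0, 48.0, 70.0, 88.0, 95.0, 99.0])  # %

params, _ = curve_fit(four_pl, conc, inhibition, p0=[0, 100, 10, 1], maxfev=10000)
bottom, top, ic50, hill = params
print(f"Estimated IC50 = {ic50:.1f} nM (Hill slope = {hill:.2f})")
```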
TOP1210 achieved concentration-dependent (0.1-1000 ng/mL) and maximal (100%) inhibition of IL-8 release (Fig. 1A), with an IC50 value of 1.9 nM (Table 2). In contrast, both BIRB-796 and dasatinib failed to achieve 50% inhibition at any concentration up to the maximum tested (1 µg/mL). BAY-61-3606 achieved a maximum of 83% inhibition, but with a potency (IC50, 607 nM; Table 2) some 300-fold weaker than TOP1210. A similar profile was observed in LPS-stimulated primary human macrophages, with TOP1210 demonstrating superior activity over the selective inhibitors, achieving potent, maximal inhibition of IL-8 (Fig. 1B) and TNF-α release (Fig. 1C) with IC50 values of 2.2 and 3.3 nM, respectively (Table 2). BIRB-796 and BAY-61-3606 failed to achieve 50% inhibition of either IL-8 or TNF-α at any concentration up to the maximum tested (250 ng/mL). Dasatinib achieved 87% inhibition of TNF-α release but was approximately 30-fold weaker (IC50, 52 nM; Table 2) than TOP1210 and achieved less than 50% inhibition of IL-8 release.

To model the adaptive immune response, PBMCs were stimulated with anti-CD3 and anti-CD28 to activate the T-cell population. This stimulation led to release of IFN-γ (16,146 ± 5926 pg/mL, mean ± SEM) and IL-2 (39,742 ± 9652 pg/mL, mean ± SEM). TOP1210 achieved maximal inhibition of IFN-γ release (Fig. 2A) with an IC50 of 2.1 nM (Table 2). As expected, the SFK-selective inhibitor dasatinib was also a potent inhibitor of IFN-γ release, with potency (IC50, 4.0 nM; Table 2) similar to that of TOP1210. BIRB-796 was inactive in the assay, and BAY-61-3606, although achieving maximal efficacy, was 120-fold weaker (IC50, 247 nM) than TOP1210. In the IL-2 release assay, a very similar profile was observed with the selective kinase inhibitors (Fig. 2B). TOP1210, however, was 18-fold weaker in potency as an inhibitor of IL-2 release (IC50, 37 nM; Table 2) compared with IFN-γ release.

Inflammation of the epithelium is a hallmark of many mucosal disorders. Because of the difficulty of culturing primary human colonic cells, HT29 cells, a human intestinal epithelial cell line, were used as a model of human colonic epithelium. Stimulation of these cells with IL-1β led to IL-8 release of 1736 ± 93 pg/mL (mean ± SEM). TOP1210 potently (IC50, 1.8 nM; Table 2) inhibited IL-8 release with maximal efficacy (Fig. 3). In contrast, all three selective kinase inhibitors were only weakly active in the assay, with none achieving greater than 50% inhibition. In a propidium iodide, flow cytometry-based assay, TOP1210 was shown not to affect cell viability at any of the concentrations tested in the cellular assays (data not shown). In summary, TOP1210 was potent and highly efficacious across all cellular models of innate, adaptive, and epithelial responses. In contrast, the selective kinase inhibitors of P38α, SFK, and Syk were active only in some assays and, for the most part, were much less potent than TOP1210.

Effect of TOP1210 and Selective Kinase Inhibitors on Proinflammatory Cytokine Release by UC Myofibroblasts

Myofibroblasts isolated from inflamed UC mucosa were used to compare the anti-inflammatory activity of TOP1210 to that of the selective kinase inhibitors. Vehicle (DMSO)-treated myofibroblasts, after TNF-α stimulation, released IL-6 and IL-8 at levels of 14,740 ± 6518 pg/mL and 49,363 ± 15,694 pg/mL (means ± SEM), respectively. TOP1210 achieved concentration-dependent inhibition of both IL-6 (IC50, 2.2 ng/mL) and IL-8 (IC50, 2.1 ng/mL) production by UC myofibroblasts (Fig.
5, Table 3). BIRB-796 required 1 µg/mL to significantly reduce both IL-6 and IL-8 release, with a maximum inhibition of only 70% achieved. Moreover, dasatinib (0.1-10 µg/mL) significantly reduced IL-6 release in a concentration-dependent manner by approximately 36% to 60%, and 10 µg/mL dasatinib significantly inhibited IL-8 by approximately 70%. Finally, BAY-61-3606 was the most active of the selective kinase inhibitors in this assay, significantly reducing both IL-6 and IL-8 release with an efficacy of over 90%, although at a weaker potency compared with TOP1210. Although the selective kinase inhibitors reduced both IL-6 and IL-8 release from UC myofibroblasts, in all cases they were significantly less potent than TOP1210 (Table 3), and in most cases their maximum efficacy was much reduced compared with TOP1210 (Fig. 5).

Effect of TOP1210 and Selective Kinase Inhibitors Alone or in Combination on Proinflammatory Cytokine Release by UC Biopsies

Biopsies from patients with IBD can be used as an inflammatory model of disease21 and were shown here to spontaneously release high levels of proinflammatory cytokines (IL-1β, 459 ± 124 pg/mL; IL-6, 18,428 ± 4847 pg/mL; and IL-8, 67,155 ± 13,377 pg/mL [means ± SEM]). TOP1210 and the selective kinase inhibitors at 1 µg/mL were profiled in an organ culture assay using inflamed colonic biopsies from patients with UC. TOP1210 significantly inhibited IL-1β, IL-6, and IL-8 release from inflamed UC biopsies by 80% to 90% (Fig. 6A, B). BAY-61-3606 and BIRB-796 also significantly inhibited release of all three cytokines (50%-65% inhibition), but with a reduced effect compared with TOP1210. Dasatinib did not have any significant effect on IL-1β, IL-6, and IL-8 release from the UC biopsies. When the three selective kinase inhibitors were combined, the level of inhibition produced against all 3 cytokines was similar to that for TOP1210 (Fig. 6A). Moreover, TOP1210 inhibited IL-1β, IL-6, and IL-8 release by UC biopsies in a concentration-dependent manner (Fig. 6B).

DISCUSSION

Selective kinase inhibitors have, to date, been disappointing in the treatment of inflammatory disorders, particularly rheumatoid arthritis and IBD.9,14 Consequently, attention has turned to multikinase inhibition in an attempt to improve efficacy by targeting a broader range of cell types and cytokines. In this study, TOP1210 was compared to selective kinase inhibitors (BIRB-796, dasatinib, and BAY-61-3606) in a range of inflammatory cell assays and in inflamed biopsies from patients with UC. TOP1210 potently inhibits P38α, SFK, and Syk kinases and is comparable to or higher in potency than each of the selective inhibitors at their relevant kinases. The limited anti-inflammatory profile of each of the selective kinase inhibitors can, in part, be explained both by the relative expression of their target kinases in the selected cell types and by the participation of specific kinases in the signaling pathways associated with the stimuli used in each cell assay. P38α and the Src family members Src, Fyn, and Yes are ubiquitously expressed in a broad range of stromal and hematopoietic cells, whereas the expression of other SFK members (Lck and Hck) and of Syk is restricted to hematopoietic cells. It is, perhaps, unsurprising that inhibition of Syk by BAY-61-3606 results in poor efficacy in the innate cell assays, given that LPS stimulation acts through the TLR4 signaling pathway, which does not involve Syk.
Conversely, although P38α is ubiquitously expressed and involved in most of the signaling cascades tested, its selective inhibition by BIRB-796 failed to achieve even 50% inhibition in any of the cellular assays. P38α is involved in downstream signaling, and redundancy at this level may explain the lack of efficacy, which has indeed been suggested as an explanation for the disappointing results with BIRB-796 in clinical studies.10 In macrophages, the effects of SFK inhibition with dasatinib appeared to be cytokine dependent, with relatively potent inhibition of TNF-α achieved compared with the poor activity against IL-8 release. In contrast, the multikinase inhibitory profile of TOP1210 results in potent inhibition of both IL-8 and TNF-α in these innate responses from monocytes and macrophages, respectively. Considering the adaptive responses, the importance of the SFK members Lck and Fyn in T-cell receptor signaling helps to explain the potent and efficacious inhibition achieved with dasatinib. The potent activity of TOP1210 in these T-cell assays is therefore most likely directly attributable to its potent SFK inhibition. Collectively, TOP1210 achieves potent inhibition across cellular models of both innate and adaptive immunity, highlighting the broad-acting anti-inflammatory potential of the compound compared with selective kinase inhibition. Interestingly, TOP1210 activity is far superior to that of the individual selective inhibitors, suggesting that there is added benefit in the simultaneous targeting of P38α, SFK, and Syk, both in efficacy and in breadth of effects across cell types, particularly in the case of IL-8 release.

Disruption of the epithelium is central to the development of mucosal disorders, particularly in UC. As a consequence of epithelial disruption, soluble mediators such as IL-8 are released, serving to recruit circulating neutrophils to the site of inflammation. Interestingly, we demonstrate that the selective kinase inhibitors are only weakly active in the IL-8 release assay, yet TOP1210 is highly potent and efficacious. A similar trend was observed in the IL-8 release assay in monocytes, so the disparity between TOP1210 and the selective inhibitors may be a cytokine-specific, rather than cell-specific, phenomenon. A range of inflammatory cytokines is known to be key mediators of IBD, with IL-1β, IL-6, IL-8, and IL-2 all being elevated in tissues from patients with IBD.6 IL-8, however, seems of particular importance in UC, as serum levels of IL-8 correlate with disease activity.22 In this study, we have demonstrated that TOP1210 potently inhibits IL-8 secretion from monocytes, epithelial cells, UC myofibroblasts, and UC biopsies, yet interestingly, the selective kinase inhibitors individually were, at best, only weakly active. To investigate this further, the IL-8 release assay in HT29 epithelial cells was used to study the effects of selective inhibitor combinations. Individually, the selective kinase inhibitors achieve poor inhibition. However, the combination studies demonstrate that the effects of inhibiting P38α with BIRB-796 can be augmented in a more-than-additive manner by concomitant inhibition of SFK and/or Syk using the appropriate selective kinase inhibitors. The more-than-additive inhibitory effects are best demonstrated when considering concentrations that, individually, are without inhibitory effect. In combination, these concentrations bring about augmented, albeit limited (approximately 30%), inhibition.
The more-than-additive effects of noninhibitory concentrations of dasatinib and BAY-61-3606 result in both increased potency (seen as leftward shifts of the BIRB-796 concentration-effect curve) and improved efficacy of BIRB-796. These combination studies demonstrate that simultaneous inhibition of P38α, SFK, and Syk is key to achieving highly potent inhibitory effects and suggest that the effects achieved in HT29 cells with TOP1210 are most likely mediated through inhibition of the optimized targets P38α, SFK, and Syk. In this investigation, IL-1β was used as the stimulus for HT29 cells; however, in UC, a wide range of inflammatory stimuli may act on epithelial cells, with numerous pathways being activated and more kinases involved. In these circumstances, multikinase inhibition may be even more beneficial. Although extensive combination experiments could not be performed with UC biopsies because of the limited availability of tissue, the data generated with the organ culture model support the more-than-additive effects observed in the combination studies in HT29 cells.

The broad inhibitory actions of TOP1210 across a range of cell types translated into potent and efficacious effects on cytokine release from inflamed UC colonic biopsies. TOP1210 inhibited, in a concentration-dependent manner, all the cytokines measured, namely IL-1β, IL-6, and IL-8, and had superior efficacy compared with the selective kinase inhibitors at equivalent concentrations. UC is a complex disease characterized by recurrent episodes of colonic inflammation involving immune and nonimmune cells, as well as the colonic epithelium.23 In addition to a T-cell response, the inflammatory process in UC involves gross epithelial cell changes and macrophage and neutrophil infiltration driven by an array of inflammatory cytokines.5 Many approaches targeting a single cytokine or cell type seem to have limited efficacy, or their development has been stopped because of toxic effects.6 Conversely, a therapeutic approach that targets multiple cellular pathways and mechanisms is more likely to be effective.24 These studies have shown that the NSKI TOP1210 produces a broad profile of inflammatory cell inhibition with greatly enhanced effects over those of selective kinase inhibitors. Recently, tofacitinib, an oral pan-Janus kinase inhibitor, has shown efficacy in a clinical trial in UC.25,26 Primarily targeting T cells, this approach has shown promise but has generally been dose-limited by toxic side effects, in the same manner as other systemic kinase inhibitor approaches.

FIGURE 6: TOP1210 is superior to the selective kinase inhibitors alone in downregulating proinflammatory cytokine release from UC biopsies. TOP1210 activity is comparable to a combination of all three selective inhibitors and reduces proinflammatory cytokine release from UC biopsies in a concentration-dependent manner. Effect of 1 µg/mL TOP1210, 1 µg/mL BIRB-796 (BIRB), 1 µg/mL dasatinib (DAS), and 1 µg/mL BAY-61-3606 (BAY), or the combination of the 3 selective kinase inhibitors (A), or TOP1210 (0.001-1 µg/mL) (B), on the release of IL-1β, IL-6, and IL-8, expressed as mean percent inhibition compared with control (DMSO, 0.5% v/v), in organ culture supernatants of inflamed colonic biopsies from patients with UC. Bars represent mean ± SEM of at least 3 independent experiments. *P < 0.05 versus DMSO; **P < 0.005 versus DMSO; ***P < 0.0005 versus DMSO.
Nevertheless, there may be opportunities to capitalize on the promise shown, for example by further optimizing targets or by restricting systemic exposure through topical administration. In summary, selective kinase inhibitors demonstrate limited efficacy and potency and individually have activity only in selected cell types. In contrast, TOP1210, through multikinase inhibition, demonstrates potent, efficacious, and broad inhibitory activity in UC tissues and across a range of cell types, including epithelial cells and innate and adaptive immune cells. These studies suggest that the inhibition of multiple kinases, either through combined selective kinase inhibitors or by the NSKI TOP1210, results in more-than-additive anti-inflammatory effects. The broad efficacy profile of TOP1210 offers significant advantages over existing selective kinase approaches and potentially offers a much-improved therapeutic benefit in IBD.
Evaluation of Herbicides and their Mixtures for Control of Broad Leaf Weeds in Wheat and their Economics

Wheat is the main winter cereal crop of north-west India and of Haryana state. Weeds are among the most serious constraints to crop production and account for ~1/3rd of the total losses caused by all pests. Among the numerous approaches in practice for handling the problem of weed infestation, chemical weed control seems indispensable and has proved efficient in controlling weeds. In order to evaluate herbicides and their mixtures for control of broad leaf weeds in wheat, together with their economics, a field experiment was conducted at the Research Farm of CCS Haryana Agricultural University, Hisar, during the rabi growing season of 2018-19. Results showed that, among the herbicides and their mixtures, aclonifen 500 + diflufenican 100 SC @ 1750 + 1750 g/ha, closely followed by halauxifen-methyl + florasulam + carfentrazone + surfactant @ 24.99 + 50 + 750 g/ha, gave the maximum weed control efficiency and severely reduced the density and dry weight of broad-leaf weeds (Chenopodium album, Rumex dentatus, Anagallis arvensis, Medicago denticulata, Melilotus indicus and Lathyrus aphaca), while the poorest weed control was recorded with the application of 2,4-D Na (80 WP) and 2,4-D Ester (38 EC) @ 625 and 1316 g/ha. Hence, the highest net returns (Rs. 65,733 ha-1) and B:C ratio (1.82) were recorded with the application of aclonifen 500 + diflufenican 100 SC, which were 0.13, 4.6 and 3.4 percent higher than weed free and 47.1, 193.7 and 41.1 percent higher than the weedy check, respectively.

INTRODUCTION

Wheat is the main winter cereal crop of north-west India and of Haryana state. The area, production and productivity of wheat in India are 29.58 m ha, 99.7 m tonnes and 3370.5 kg/ha, respectively. Weeds are the most omnipresent class of pests that interfere with crop plants through competition and allelopathy, resulting in direct losses of quantity and quality of the product (Gupta, 2004) and indirectly increasing production costs, including costs of labor, equipment, chemicals and other management inputs (Singh et al., 2011a). The weed flora of wheat consists of both grassy and broad leaf weeds, and if uncontrolled, they interfere with crop growth by competing for available nutrients, light and water (Jeet et al., 2010). Weeds are among the most serious constraints to crop production and account for ~1/3rd of the total losses caused by all pests (Chhokar et al., 2012). Major weeds associated with wheat are Phalaris minor, Avena spp., Chenopodium album, Melilotus spp., Anagallis arvensis, Vicia sativa, Lathyrus aphaca and Rumex dentatus. In recent years, a new species, Rumex sp., has emerged as a serious problem in the irrigated wheat eco-system (Singh et al., 2011b). In India it has been estimated that, out of the total yield losses caused by pests in wheat, weeds account for ~33%, and the extent of yield reduction largely depends on the growth and behavior of individual weed species in relation to agro-ecological conditions. Numerous approaches have been in practice for handling the problem of weed infestation, such as hoeing, weeding, tillage, harrowing, crop rotation, and biological and chemical control. Chemical weed control seems indispensable and has proved efficient in controlling weeds (Kahramanoglu & Uygur, 2010), and hence currently about two-thirds, by volume, of the pesticides used worldwide in agricultural production are herbicides.
Indiscriminate use of herbicides for weed control during the past few decades has resulted in serious ecological and environmental problems, such as resistance, shifts in weed populations towards species more closely related to the crops they infest, minor weeds becoming dominant (Heap, 2007), and greater environmental and health hazards (Rao, 2000). Continuous application of the same herbicide, or use of lower-than-recommended doses, has led to the development of herbicide resistance (Yadav et al., 2013). Herbicides with differential selectivity can be applied sequentially, but this involves application in two rounds, which increases the cost. Therefore, mixing two different herbicides and applying them simultaneously widens the spectrum of weed control and saves time, application cost and application rate. A need therefore remains to evaluate new herbicides with different modes of action to tackle the ever-increasing problem of complex weed flora. Keeping these points in view, a field experiment was planned to evaluate herbicides and their mixtures for control of broad leaf weeds in wheat and their economics.

MATERIALS AND METHODS

Sowing of variety WH 1105 was done by seed drill as per treatments at 5-6 cm depth using 100 kg seed ha-1, and the layout was established. Fertilizer (NPK) was applied at the recommended dose. Nitrogen was applied at the rate of 150 kg ha-1 in two splits, i.e. half at sowing and half at first irrigation, while the full doses of phosphorus (P2O5) and potassium (K2O) were applied at sowing at the rate of 60 kg ha-1 each. Phosphorus was applied through di-ammonium phosphate; the nitrogen remaining after deducting that supplied by di-ammonium phosphate was applied as urea. Potash was applied through muriate of potash. Herbicides at the specified doses were sprayed, alone or tank-mixed, with a knapsack sprayer fitted with a flat fan nozzle using 500 liters of water per hectare at 35 DAS. Other cultural practices were followed as per the requirements of the treatments and the crop, according to the recommended package of practices. Observations on weed density, weed dry matter and weed control efficiency were recorded following standard procedures at 30 DAS, 30 and 60 DAT and at maturity on a per-mrl basis, and the results were statistically analyzed. The density of broad leaf weeds was determined by the quadrate method (Misra & Puri, 1954). The quadrate (1.0 m2) was thrown randomly at a place in each plot at 30 days after sowing, at 30 and 60 days after spraying, and at harvest. The weeds inside the quadrate were counted, and the average of two quadrates was converted to plants m-2. The weeds present within the quadrate, from a place selected at random in each plot, were taken for dry matter accumulation at each observation interval. Weed control efficiency (WCE) was calculated as per the formula given below (a worked numeric example appears below):

WCE (%) = [(W2 - W1) / W2] × 100

where W2 = dry weight of weeds in the weedy plot and W1 = dry weight of weeds in the treated plot.

RESULTS AND DISCUSSION

The major broad-leaf weed flora observed during the crop season in the experimental plots comprised Chenopodium album, Rumex dentatus, Anagallis arvensis, Medicago denticulata, Melilotus indicus and Lathyrus aphaca. The various herbicidal treatments exerted significant effects on the density and dry weight of weeds, weed control efficiency and economics.
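As a quick numeric illustration of the WCE formula reconstructed above, the following sketch computes WCE from hypothetical dry weights; the chosen values simply reproduce a WCE of about 71%, similar to the best treatment reported below.

```python
# Sketch: weed control efficiency from dry weights (hypothetical values).
def wce(w_weedy: float, w_treated: float) -> float:
    """WCE (%) = ((W2 - W1) / W2) * 100, with W2 the weedy-check dry
    weight and W1 the treated-plot dry weight."""
    return (w_weedy - w_treated) / w_weedy * 100.0

print(f"WCE = {wce(7.6, 2.2):.1f} %")  # -> WCE = 71.1 %
```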
Effect on density of broad-leaf weeds

The highest density of broad-leaf weeds was observed in the weedy check plot, as weeds grew luxuriantly and uninterrupted throughout the crop season in the absence of any weed control practice. Weed density in the weedy check plot was significantly higher in comparison to the other weed control treatments. Similar results were reported by Shehzad et al. (2012) and Hashim et al. (2002), who found that the maximum weed population was recorded in the weedy check plot in a herbicide trial on wheat. The broad-leaf weed population per mrl was significantly reduced after application of the herbicidal treatments, as recorded at 30 and 60 DAT and at maturity. All herbicides and their mixtures were found effective in controlling broad-leaf weeds in the wheat field (Table 1 and Fig. 1). Aclonifen 500 + diflufenican 100 SC recorded significantly lower weed density compared to the other herbicides and their mixtures, except carfentrazone-ethyl 40 DF, metsulfuron-methyl 10% WP + carfentrazone 40 DF + 0.2% surfactant, 2,4-D Na/Ester + carfentrazone and halauxifen-methyl + florasulam + carfentrazone + surfactant, at all stages of observation up to maturity. Aclonifen 500 + diflufenican 100 SC recorded 4.53, 3.41 and 1.99 weeds/m2 at 30 and 60 days after treatment and at maturity, respectively, values 56.7, 34.9 and 52.2 percent lower than the weedy check at 30 and 60 DAT and at maturity, respectively, and was closely followed by halauxifen-methyl + florasulam + carfentrazone + surfactant. Chhokar et al. (2007) reported similar results, describing that herbicide mixtures effectively controlled weeds compared to the weedy check. Punia et al. (2017), Barla et al. (2017) and Meena et al. (2017) similarly reported the superiority of tank-mix application of herbicides suppressing broad-leaf and grassy weeds over their individual applications in reducing total weed density. The density of Convolvulus arvensis was significantly reduced by the treatments containing carfentrazone, i.e. carfentrazone-ethyl 40 DF @ 50 g/ha, metsulfuron-methyl 10% WP + carfentrazone 40 DF + 0.2% surfactant, 2,4-D Na/Ester + carfentrazone and halauxifen-methyl + florasulam + carfentrazone + surfactant, compared to the other treatments.

Effect on dry matter accumulation (DMA) of broad-leaf weeds

The different weed control treatments exerted a significant influence on dry matter accumulation (DMA) in comparison to the weedy check after the application of the herbicidal treatments. At 30 DAS, non-significant differences in dry matter accumulation of total broad-leaf weeds were recorded among the herbicide treatments compared to the weedy check, because the herbicide treatments were imposed at 35 DAS. At 30 days after treatment, dry matter accumulation of weeds was significantly affected by the various herbicide treatments (Table 1). Aclonifen 500 + diflufenican 100 SC recorded significantly lower dry matter accumulation of broad-leaf weeds compared to the other herbicides and their mixtures, except carfentrazone-ethyl 40 DF, metsulfuron-methyl 10% WP + carfentrazone 40 DF + 0.2% surfactant, 2,4-D Na/Ester + carfentrazone and halauxifen-methyl + florasulam + carfentrazone + surfactant, at all stages of observation up to maturity.
Aclonifen 500 + diflufenican 100 SC recorded 3.56, 2.38 and 1.81/m2 at 30 and 60 days after treatment and at maturity, respectively, values 53.3, 45.7 and 47.5 percent lower than the weedy check at 30 and 60 DAT and at maturity, respectively, and was closely followed by halauxifen-methyl + florasulam + carfentrazone + surfactant. The population and dry matter accumulation of grassy and broad-leaf weed species were reduced drastically with the use of herbicides (Sharma et al., 2018).

Fig. 1: Density of broad-leaf weeds/m2

Supporting findings were also reported by Narial et al. (2008) and Meena and Singh (2011). Similar findings were reported by Zhang et al. (1995): applying two or more herbicides simultaneously, either using pre-formulated mixtures or by mixing different herbicide products before application, is a very common approach in intensive agriculture. The dry weight of Convolvulus arvensis was reduced significantly by the treatments containing carfentrazone, i.e. carfentrazone-ethyl 40 DF @ 50 g/ha, metsulfuron-methyl 10% WP + carfentrazone 40 DF + 0.2% surfactant, 2,4-D Na/Ester + carfentrazone and halauxifen-methyl + florasulam + carfentrazone + surfactant, compared to the other treatments.

Weed control efficiency

All the herbicides and their mixtures recorded higher weed control efficiency compared to the weedy check plot at all observation stages up to maturity (Table 2). Similar results were reported by Shehzad et al. (2012) and Hashim et al. (2002), who found that the maximum weed population was recorded in the weedy check plot in a herbicide trial on wheat. Chhokar et al. (2007) reported similar results, describing that herbicide mixtures effectively controlled weeds compared to the weedy check. Among the herbicides and their mixtures, aclonifen 500 + diflufenican 100 SC recorded the highest weed control efficiency, closely followed by halauxifen-methyl + florasulam + carfentrazone + surfactant, at all observation stages, while the application of 2,4-D Na (80 WP) and 2,4-D Ester (38 EC) recorded the lowest weed control efficiency. Aclonifen 500 + diflufenican 100 SC recorded weed control efficiencies of 79.4, 73.4 and 71.1 percent at 30 and 60 DAT and at maturity, respectively (the weedy check, by definition, has zero weed control efficiency).

Economics

The economics of the various treatments were estimated for comparison and to identify the most economical herbicide treatment for control of broad-leaf weeds in wheat. Among all the herbicides and their mixtures tested (Table 2), aclonifen 500 + diflufenican 100 SC @ 1750 + 1750 g/ha recorded the best economics, closely followed by halauxifen-methyl + florasulam + carfentrazone + surfactant @ 24.99 + 50 + 750 g/ha. Aclonifen 500 + diflufenican 100 SC @ 1750 + 1750 g/ha recorded a higher cost of cultivation (Rs. 79,915 ha-1), which was 3.7 percent lower than weed free and 4.3 percent higher than the weedy check. Among the different herbicides and their mixtures, the application of aclonifen 500 + diflufenican 100 SC @ 1750 + 1750 g/ha recorded the maximum gross return (Rs. 145,648 ha-1), net return (Rs. 65,733 ha-1) and B:C ratio (1.82), which were 0.13, 4.6 and 3.4 percent higher than weed free and 47.1, 193.7 and 41.1 percent higher than the weedy check, respectively. Similar findings were given by Ashrafi et al. (2009), who reported that broad-spectrum herbicides gave the maximum net return in wheat, while the minimum net return was received in the weedy check. Similarly, Kamrozzaman et al.
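The reported economics can be cross-checked arithmetically: net return = gross return - cost of cultivation, and the benefit:cost ratio here evidently equals gross return divided by cost of cultivation (this convention is inferred from the reported values). A minimal sketch using the figures above:

```python
# Cross-check of the reported economics for aclonifen + diflufenican.
gross_return = 145_648  # Rs./ha (reported gross return)
cost = 79_915           # Rs./ha (reported cost of cultivation)

net_return = gross_return - cost  # -> 65,733, matching the paper
bc_ratio = gross_return / cost    # -> ~1.82, matching the paper
print(f"Net return = Rs. {net_return:,} per ha")
print(f"B:C ratio = {bc_ratio:.2f}")
```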
(2015) and Singh and Gosh (1992) described that weed control in wheat through herbicides is more economical than hand weeding. (Original data given in parentheses were subjected to square-root (√) transformation before analysis.)
CONCLUSION
Based on the field experiment, it is concluded that, among all the herbicides and their mixtures tested, the application of aclonifen 500 + diflufenican 100 SC @ 1750 + 1750 g/ha at 35 DAS was the most effective against broad-leaf weeds in wheat (except Convolvulus arvensis). It recorded significantly lower weed density and weed dry matter accumulation (52.2 and 47.5 percent lower than the weedy check, respectively), together with the highest weed control efficiency (71.1%), net returns (Rs. 65,733/ha) and B:C (1.82) at harvest, which were 71.1, 193.7 and 41.1 percent higher than the weedy check plot, respectively.
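The economic indicators in the conclusion are simple derived quantities. A minimal sketch of the arithmetic, using the gross return and cost of cultivation reported above for aclonifen 500 + diflufenican 100 SC (net return and B:C are recomputed; B:C is taken here as gross return divided by cost, which reproduces the reported 1.82):

# Sketch: net return and benefit:cost ratio from the reported figures for
# aclonifen 500 + diflufenican 100 SC (Rs./ha). Gross return and cost of
# cultivation are taken from the text above; the other two are derived.

gross_return = 145_648        # Rs./ha, reported
cost_of_cultivation = 79_915  # Rs./ha, reported

net_return = gross_return - cost_of_cultivation
bc_ratio = gross_return / cost_of_cultivation  # gross/cost reproduces 1.82

print(f"Net return: Rs. {net_return}/ha")  # Rs. 65733/ha, as reported
print(f"B:C ratio : {bc_ratio:.2f}")       # 1.82, as reported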
2020-11-26T09:02:23.251Z
2020-10-30T00:00:00.000
{ "year": 2020, "sha1": "1bbf5d38ddc9463697f9ace5b7187b366444e52d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.18782/2582-2845.8268", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "159a6c14fc6f13e91cfda395c07f0d6583097fa5", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
246925891
pes2o/s2orc
v3-fos-license
A Context-Independent Ontological Linked Data Alignment Approach to Instance Matching
Linking data by finding matching instances in different datasets requires considering many characteristics, such as structural heterogeneity, implicit knowledge, and uniform resource identifier-oriented (URI-oriented) identification. The authors propose a context-independent approach to align linked data through an alignment process based on the ontological model's components and considering data's multidimensionality. The researchers experimented with the proposed approach against two methods for aligning linked data in two datasets and evaluated the precision, recall, and f-measure metrics. The authors also conducted a case study in a real scenario considering a Brazilian publication dataset on computers and education. This study's results indicate that the proposed approach outperforms the other methods (regarding the precision, recall, and f-measure metrics) while requiring less work when changing the dataset domain. This work's main contributions include enabling real datasets to be semi-automatically linked and presenting an approach capable of calculating resource similarity.
INTRODUCTION
Publishing or maintaining Linked data on the Web goes beyond making datasets available through resource description framework (RDF) serializations, which are the cornerstone of innovations and applications in semantic web and information systems (Avila-Garzon, 2020). Newly published data must also be linked to other existing datasets. However, creating links between datasets requires careful analysis by an expert, which, despite being effective, is not scalable, given that the amount of published data is constantly increasing. Consequently, a manual publishing process is unviable. Therefore, to efficiently build the Web of data, there must be solutions capable of linking data automatically or semi-automatically. Automatically linking data is a problem recognized by many communities. In Databases, the problem is known as record linkage (Gu et al., 2003; Karr et al., 2019), which aims to identify and link resources that are judged to represent the same real-world entity. Other terms for this problem include the entity resolution problem (Menestrina et al., 2005; Wu et al., 2020) and deduplication (Sarawagi and Bhamidipaty, 2002; Xu et al., 2017; Yang et al., 2019). Instance matching is the term that the Linked data community uses to refer to the problem. In this community, the main goal is to find matching instances in different datasets (Abubakar et al., 2018). However, instance matching has additional characteristics (Castano et al., 2011; Mountantonakis & Tzitzikas, 2019; Azmy et al., 2019), such as (i) structural heterogeneity, which refers to variation in the structure of the instances; (ii) implicit knowledge, which refers to the characteristics and constraints exhibited by the domain; and (iii) URI-oriented identification, which refers to reusing URIs to identify new information about existing instances. Thus, there is a need for specific solutions for the correct execution of the instance matching process. To identify and link resources on the Web, the community has been developing a growing number of solutions. The Ontology Alignment Evaluation Initiative (OAEI) conducts an annual evaluation consisting of aligning two predefined datasets and comparing the alignment generated by each solution with a reference alignment.
However, according to Homoceanu et al. (2014), the solutions are not ready to automatically align data despite the good results. Most works are used only on conventional OAEI datasets with small ontologies (Ferranti et al., 2021), and there is a small number of real-world ontology matching application approaches (Otero-Cerdera et al., 2015; Ferranti et al., 2021). Also, no technique stands out from the others in all aspects (Xue & Tang, 2017). This study proposes a context-independent approach for the alignment of Linked data through an alignment process that considers the data and characteristics of the ontological model. Data properties and relationships drive the alignment of resources/instances. For this purpose, a cascade alignment approach is proposed. Moreover, the proposed approach addresses the alignment between real datasets, which enables reliable alignment of datasets distributed on the Web. This work provides the following contributions: i) development of a context-independent process for the alignment of Linked data; ii) enabling the execution of the alignment directly in the data storage; and iii) presenting a real-world case study dealing with heterogeneity and data quality issues. This research targets the following problem:
General Problem: How to determine that two instances refer to the same real-world entity?
Currently, strategies based on similarity, learning, rules, and context (Castano et al., 2011; Abubakar, 2018) have been used to solve the problem. However, these instance-matching tools are not ready to reliably align real-world data automatically (Otero-Cerdera et al., 2015; Ferranti et al., 2021; Homoceanu et al., 2014). Thus, the following specific research questions arise:
RQ1: How can the effectiveness of instance-matching tools be improved?
RQ2: How effective is the solution in a real-world scenario of instance-matching?
The first study is experimental. Its general objective is to evaluate the effectiveness of instance-matching tools (Risk Minimization based Ontology Mapping - RiMOM, AgreementMakerLight - AML, and the proposed approach) using real-world data (based on a specific dataset from the OAEI benchmark). The experiments' purpose is to show that the proposed approach, despite not containing computations specifically developed for the datasets, can effectively create instance matches. The experiment evaluates the instance matching tools considering precision, recall, and f-measure, and used an OAEI dataset (DOREMUS task 1) on which both RiMOM and AML were previously tested (IM-OAEI 2016). The authors then performed a real-world case study using the proposed approach, matching information about publications and researchers' curricula, publishing linked data, and answering competence questions raised during requirements gathering. This article is organized as follows. The first section (Related Work) compares this research with related works. The next section (Alignment Process) presents the alignment process proposed in this paper, describing the process's steps and implementation. The next section (Experiment) depicts the experimental design conducted in this research. Then, the authors present the case study conducted in a real-world scenario using the approach proposed in this paper. The final section presents the concluding remarks.
RELATED WORK
Zamazal (2020) presents a survey of ontology benchmarks for semantic web ontology tools. More specifically, Abubakar et al.
(2018) present a literature review on instance-based ontology matching. They describe a general architecture where the ontologies to be compared are loaded, an instance-matching mechanism is executed, a similarity calculation supports the creation of relationship matchings, and the mappings are recorded (Abubakar et al., 2018). Mountantonakis & Tzitzikas (2019) present a survey on large-scale semantic integration of linked data. The authors show eight scalable tools performing instance matching: Silk (Volz et al., 2009), LIMES (Ngomo and Auer, 2011), PARIS (Suchanek et al., 2011), WebPie (Urbani et al., 2012), LINDA (Böhm et al., 2012), MinoanER (Efthymiou et al., 2016), CoSum-P (Zhu et al., 2016) and MINTE (Collarana et al., 2017). Table 1 shows a summary of related work. In this table, the focus on instance matching is considered when the work only describes dealing with the instances, not considering possible matchings between classes and instances (here called entity matching). The number of datasets is related to the input that each work mentions. Although the works adopt different approaches, all of them consider the ontology or graph structure only to a limited extent, different from the work presented in this paper, which is based on the ontology structure. Although the tools can find matches between instances, they are still deficient in terms of some criteria, such as their use of computations specific to the benchmark, in addition to their minimal utilization of ontologies, which are used only for metadata generation with the intent of choosing between the matching approaches available. Unlike the tools mentioned, the proposed tool uses ontologies to guide the process of instance matching. Additionally, this approach allows the user to define how the alignment should be performed. Another difference between the proposed tool and previously developed tools is the cascade alignment, which uses instances of concepts related to the concept whose instances will be aligned. Cascade alignment goes beyond instances: it explores the existing relationships, and from these, it is possible to find new matches. Additionally, the proposed tool allows matches between instances to be stored directly in the triple store database where the data reside. As evaluation collections, Mountantonakis & Tzitzikas (2019) present the OAEI benchmark and the benchmarks of Daskalaki et al. (2016). Both works agree that the most important venue for judging the performance of instance matching techniques and tools is the OAEI. Initially, the OAEI evaluated only ontology alignment tools, beginning to evaluate solutions for aligning data in 2009. From 2009 to 2017, the OAEI track focused on instance matching was called the instance matching (IM) track; after that, a track for instance and schema matching related to knowledge graphs was introduced, called the Knowledge graph track. Table 2 shows the tools that participated in the OAEI IM and KG tracks from 2009 to 2020, where RiMOM and LogMap have the most appearances. Complementarily, Table 3 shows a brief overview of the results from OAEI tracks related to instance matching from 2016 to 2020, considering the f-measure. Three tools are highlighted: AML, LogMap and RiMOM. Until the 2016 edition, RiMOM presented the best results in general in the IM track. From 2016 to 2020, AML is the only tool that participated in all editions. Although AML's results from 2017 to 2020 are not the best ones, it has good results compared to the others.
Among all the tools presented by Mountantonakis & Tzitzikas (2019) and those participating in OAEI editions, AML and RiMOM were selected for comparison with the work presented in this paper due to their prominence and consistency in the OAEI, specifically in the IM track or, more recently, the Knowledge Graph track. RiMOM (Li et al., 2009; Zhang et al., 2016) is a tool for aligning Linked data. It implements a considerable number of approaches for alignment, the choice of which is made based on the metadata extracted from the ontology. Furthermore, RiMOM-2016 uses an inverted index to index the objects and generate candidate pairs for possible alignment. Pairs are generated when two resources share at least one predicate and one object. RiMOM-2016 uses the ontologies not only to align the properties but also as input for metadata generation. RiMOM was one of the main instance matching tools until 2016. AML (Faria et al., 2016) is an ontology alignment tool initially based on lexical similarities and techniques, emphasizing the use of external sources as background knowledge. AML relies on three alignment algorithms for instance matching: HybridStringMatcher, ValueStringMatcher, and Value2LexiconMatcher. The first algorithm uses several approaches (comparisons between sentences and between words) to generate the similarity; this hybrid approach also utilizes WordNet. The second algorithm uses value mapping to calculate the similarity, penalizing pairs in which annotations or data properties are not the same. Finally, the third algorithm unites the other two approaches. Although AML has different alignment algorithms, they all work only at the data level. Consequently, the characteristics of the properties are disregarded throughout the matching process. AML has presented some of the top results in the OAEI instance matching track and Knowledge Graph track since 2016. While RiMOM was one of the main instance matching tools until 2016, AML has been one of the top tools since 2016 in the Knowledge Graph track, which deals with instance matching.
ALIGNMENT PROCESS
The process consists of four main steps: selecting datasets, identifying concepts, listing resources, and aligning data. Each step of the process is described in the following subsections.
Step 1: Selecting Datasets
The step involving selecting datasets aims to determine which datasets will be aligned. As the scope of the process is Linked data, ontologies/vocabularies can support the data modeling in publishing processes. The dataset is therefore structured in triples and uses concepts modeled in ontologies/vocabularies.
Step 2: Identifying Concepts
After choosing the datasets, Step 2 consists of choosing the main and related concepts. For this, two SPARQL queries were developed. The first query explores the ontology, especially the rdfs:domain and rdfs:range relationships of the object properties (see Code 1), whereas the second query explores the data and the relationships established by the instances. In the query (Code 1), line 4 retrieves all the ontology or vocabulary concepts. In line 5, a restriction is applied: the concepts must be in the domain or range of a relationship. Consequently, an instance of such a concept will be the subject or object of a triple (see Figure 1). The query presented in Code 2 comprises two parts because the concept can model instances that are the subject or object of a relationship. In the first part, the selected concept represents the subject of the triple.
It is possible to retrieve the concepts that model the related instances (objects) using the instances' relationships. In the second part, the inverse occurs: the concept represents the triple's object, and the concepts that represent the subjects are retrieved. As a result of Code 2, a list containing the concepts related to the chosen (main) concept is provided. At this point, users must choose which related concepts they want to use to improve the alignment of the chosen concept. This decision will influence both the time that the process will take to conclude and the number of resources aligned at the end of the process because, for each related concept, there will be a new execution of steps (iii) and (iv). This loop is necessary because some alignments will only be possible through the relationship between these concepts.
Step 3: Listing Resources
The step of listing resources can be understood as the retrieval of the resources corresponding to the concepts. It is important to highlight that the listing/retrieval of resources from the knowledge database can be executed more than once during the process, which generates a set of resources for each chosen concept. Additionally, this step is responsible for generating candidate pairs, in which the resources of Dataset D1 are compared with the resources of Dataset D2.
Step 4: Aligning Data
The data alignment step is divided into two activities, (i) simple alignment and (ii) cascade alignment, detailed in the following sections.
Simple Alignment
Aligning the resources requires several procedures, including data processing, resource comparison, and similarity analysis. The first procedure, data processing, refers to transformations in the properties of the resources. These transformations are necessary to help the similarity algorithms better analyze the similarity between the resources. In the comparison procedure, each of the properties is analyzed. If a property does not pertain to one of the resources, it is exempted from the comparison. Figure 2 shows a comparison between the properties of each resource. Two equations are used to define the similarity between the instances. Equation (1) defines the set of properties considered during the comparison between the resources. The set is obtained from the difference between the largest set of properties and the set that should be disregarded. Therefore,

P = Max(Pr1, Pr2) \ Pd (1)

where:
• Pr1 - Resource 1 set of properties;
• Pr2 - Resource 2 set of properties;
• Pd - Set of properties that must not be considered;
• Max(Pr1, Pr2) - Retrieves the set with the maximum number of properties.
Figure 2. Comparison between resources
Equation (2) concerns the similarity function between resources; this equation can be understood as the mean of the (partial) similarities between two resources:

S(R1, R2) = (1 / |P|) Σ_{p ∈ P} sim(V(R1, p), V(R2, p)) (2)

where:
• S - Similarity function;
• V(R, P) - Value of property P in a resource R;
• sim - partial (lexical) similarity between two property values.
This approach was chosen so as not to favor any of the partial similarities. However, there may be other functions more appropriate for calculating the similarity between resources.
Cascade Alignment
Cascade alignment is named for the linkage between the necessary activities: (i) retrieving instances that pertain to the related concept, (ii) aligning instances that pertain to the related concept, (iii) retrieving instances that pertain to the main concept, and (iv) aligning instances that pertain to the main concept. The name is also intended to refer to the cascade development model (Royce, 1987), the first software development model.
The cascade development model and the cascade alignment approach share some similarities. These include how the activities are executed, which is sequential: each activity can only begin when the previous activity is completed. Another shared characteristic is the fact that the whole project is planned before execution. Unlike a project that uses the cascade model, in which the entire project must be completed in the final stage, matching between instances is only considered concluded when all the related concepts have been used in the matching process. It is worth highlighting that a "cascade" is generated for each related concept selected. Figure 3 shows the relationship between the concepts and the cascade. Let us imagine that someone wants to discover which authors are present in two datasets simultaneously and wishes to use the papers registered in both databases for this. The main concept and the related concept are then Author and Publication, respectively, and the cascade proceeds through four activities: retrieving instances of the related concept, aligning instances of the related concept, retrieving resources of the main concept, and aligning the retrieved resources.
Implementation of the Process
In defining the process implementation, the authors relied on existing literature reviews (Feitosa et al., 2018; Barbosa et al., 2021). Some components were developed to perform the proposed process: similarity analysis, data persistence, alignment generation, logic between the steps, and pre-processing (see Figure 4). Some components have been reused from other work (in black); an example is the lexical similarity component, which contains several algorithms for detecting similarity between texts. Other components are newly developed (white), such as those responsible for detecting resource similarity, alignment, etc. Although various solutions contain algorithms for the calculation of similarity and alignment between resources, the authors chose to develop an approach that contemplates problems faced when working with real databases (e.g., accentuation, absence of properties, and formatting) (Castano et al., 2011; Ferrara et al., 2008). The pre-processing component's function is to perform treatments on the texts that will be given to the similarity function, such as the treatment of accents and punctuation. The similarity component is divided into two subcomponents: lexical similarity and resource similarity. The first uses metrics that analyze the similarity between words and texts, which include the Levenshtein (1966), cosine (Singhal, 2001), and Jaro-Winkler (Winkler, 1990) techniques, amongst others. The second subcomponent, which refers to resource similarity, uses the first, and its function is to generate the similarity between resources. An approach based on the semantic subgraph technique (Wang and Xu, 2009) calculates the resources' similarity. In practice, a semantic subgraph refers to the triples related to a resource and agrees with the ontological modeling. Figure 1 represents a subgraph that relates an author and the author's publications. In addition to object properties, a subgraph also has data properties associated with both the main resource (Author) and related concepts (Production). The alignment component is responsible for determining, in accordance with the values obtained in the similarity step, whether the analyzed resources are related to the same real-world entity.
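As a rough illustration of Equations (1) and (2) and of the lexical/resource similarity split described above, the following Python sketch compares two resources property by property and averages the partial similarities. difflib.SequenceMatcher stands in for the lexical metrics the paper actually uses (Levenshtein, cosine, Jaro-Winkler), and the property names and values are invented for the example; this is not the tool's code.

# Sketch of the resource similarity of Equations (1)-(2): take the larger
# property set minus the disregarded properties, compare property values
# with a lexical metric, and average the partial similarities.
from difflib import SequenceMatcher

def lexical_sim(a: str, b: str) -> float:
    """Stand-in lexical similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def resource_similarity(r1: dict, r2: dict, disregard: set = frozenset()) -> float:
    # Equation (1): P = Max(Pr1, Pr2) \ Pd
    larger = r1 if len(r1) >= len(r2) else r2
    props = set(larger) - set(disregard)
    partials = []
    for p in props:
        # A property absent from one of the resources is exempted from
        # the comparison, as described in the text.
        if p in r1 and p in r2:
            partials.append(lexical_sim(str(r1[p]), str(r2[p])))
    # Equation (2): the mean of the partial similarities (here normalized
    # by the number of properties actually compared).
    return sum(partials) / len(partials) if partials else 0.0

a = {"name": "Maria da Silva", "email": "maria@example.org"}  # illustrative
b = {"name": "Maria Silva"}
print(resource_similarity(a, b, disregard={"email"}))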
This component utilizes acceptance thresholds, determined beforehand, to decide whether an alignment must be performed. For this reason, the alignment process is not a fully automated task, because it requires the threshold values to be adjusted. There are various methods for determining the threshold value, ranging from executing the process multiple times and analyzing the best cost/benefit between precision and recall to using techniques that update the threshold value dynamically. The core component is responsible for concentrating and coordinating the settings during execution. Currently, the core component has three modalities for matching between instances: simple alignment, which is executed in all modalities and can also be executed independently; cascade alignment, which is performed when a concept related to the concept whose instances are to be aligned is chosen; and multi-cascade alignment, which occurs when more than one related concept is chosen. The persistence component is responsible for materializing the matches found by the alignment component. For this, JOINT is used (see Figure 5). According to Holanda et al. (2013), JOINT is a framework to facilitate the development of ontology-based applications. The features presented by the JOINT tool allow operations to be performed directly on the triple server (Virtuoso, OWLim, etc.). Additionally, this tool supports the execution of SPARQL queries and, through a translation system, transforms the triples into Java objects.
THE EXPERIMENT
This study presents a context-independent data alignment process consolidated into a tool for performing instance matching. In this experiment, the evaluation of instance matching tools uses two datasets from the OAEI 2016 Instance Matching (IM) track and their reference alignments. The OAEI 2016 IM track was the only edition combining the RiMOM and AML tools, while the DOREMUS tasks were chosen because they contain real-world data from two major French cultural institutions. The two datasets used in the experiment were: nine heterogeneities (9-heterogeneities) and false positives trap (falsepositives-trap); the first considers different types of heterogeneities, and the second focuses on false positive matchings. The experiment evaluates whether the proposed approach improves instance-matching tools' effectiveness and is therefore related to RQ1 (How can the effectiveness of instance-matching tools be improved?). Formally, the objective of this experiment can be defined as follows: to analyze instance matching tools to compare them in terms of their effectiveness, from the point of view of generating matches between instances, in the context of data alignment between datasets, in order to use the best approach in a real-world case study. The instance matching tools (AML, RiMOM-2016, and the proposed tool) were compared in terms of effectiveness (precision, recall and f-measure). Figure 6 shows the execution steps of each alignment:
• Construct the container with the settings reset;
• Load the data;
• Execute the tool inside the container;
• Collect the alignment data;
• Destroy the container;
• Analyze the data.
The experiment has two possible scenarios to evaluate the instance matching tools, with one execution per tool, thus totaling six executions. Scenario C1 was applied on the 9-heterogeneities dataset, and scenario C2 was applied on the falsepositives-trap dataset.
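Precision, recall, and f-measure are computed by comparing the produced alignment with the reference alignment as sets of matched pairs. A minimal sketch of that computation follows; the pairs are illustrative, not taken from the DOREMUS datasets.

# Sketch: effectiveness metrics over alignments treated as sets of
# matched instance pairs (produced vs. reference alignment).

def evaluate(produced: set, reference: set) -> tuple:
    tp = len(produced & reference)  # correct matches found
    precision = tp / len(produced) if produced else 0.0
    recall = tp / len(reference) if reference else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure

produced = {("d1:a1", "d2:b1"), ("d1:a2", "d2:b9")}   # illustrative
reference = {("d1:a1", "d2:b1"), ("d1:a3", "d2:b3")}  # illustrative
print(evaluate(produced, reference))  # (0.5, 0.5, 0.5)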
The following instruments were used to conduct the instance-matching experiment:
• IntelliJ IDEA 2016.3 for development of the code and execution of the proposed tool;
• Virtuoso RDF Store 07.20.3217;
• OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode).
The entire experiment was conducted in containers, making it possible to isolate the applications and the effects of the executions. The applications were thus executed in identical environments without generating side effects on each other. After the execution, the resulting alignments were analyzed based on precision, recall, and f-measure. Figures 7 and 8 show the results corresponding to each of the tools in each of the scenarios, and Table 4 summarizes the data obtained for each scenario. As presented in Table 4, the proposed tool had a precision of 1 for scenario C1 and 0.906 for scenario C2. Regarding recall, the proposed tool obtained 0.875 for scenario C1 and 0.707 for scenario C2; despite this good performance, the proposed tool produced values equal to those of at least one of the other tools in each of the scenarios. According to the results presented in Table 4, the proposed tool ranked first in scenario C1, with an f-measure of 0.933, and second in scenario C2, with 0.794. The researchers used Fisher's test (Fisher, 1922) to compare the pairs of metrics, using the results obtained in the experiment as input according to the configuration listed in Table 4. The statistical test yielded a p-value of 0.8333, which indicates that the tools have similar effectiveness in terms of the metrics. The experiment sought to evaluate the effectiveness of the Linked data alignment tools in terms of precision, recall and f-measure. These variables were evaluated separately in each of the two scenarios, C1 and C2. In scenario C1, the data exhibited nine types of heterogeneity (e.g., multilinguality, differences in the catalogues, phonetic differences, and different degrees of description). In scenario C2, the data exhibited similar instance sets with only one possible match, which means that the other instances are false positives. Given the statistical test results, the tools have similar effectiveness for both of the scenarios analyzed. However, from the analyses performed on the metrics, it was possible to verify that the proposed tool stood out in one of the scenarios, C1. It should be emphasized that the proposed approach does not use implementations specific to the analyzed datasets, which enables it to be easily used in other contexts. Although the experiment was designed to minimize possible threats that would compromise its conclusions, some threats should be mentioned. One possible internal threat to the experiment's validity is the selection of the experimental units, because the datasets used in the experiment were provided by the OAEI benchmark, and no other datasets or benchmarks were used. The experimental units were executed with only one configuration setting and only one version of each tool. It is possible that the number of tools and scenarios is not sufficient for observing significant differences in the effectiveness of the approaches used for instance matching. Additionally, one must consider that response time was not considered in the experiment. Owing to the small amount of data per dataset, the number of instances per dataset may not be sufficient to observe significant differences in the associated metrics.
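For reference, a sketch of how such a Fisher exact test can be run is given below. The 2x2 contingency table (correct vs. incorrect matches for two tools) is purely illustrative, since the text reports only the resulting p-value (0.8333) and not the exact table used.

# Sketch: Fisher exact test on an illustrative 2x2 table of correct vs.
# incorrect matches for two tools. The counts are assumptions, not the
# experiment's actual data.
from scipy.stats import fisher_exact

#            correct  incorrect
table = [[28, 4],   # tool A (illustrative)
         [29, 3]]   # tool B (illustrative)
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.3f}, p-value = {p_value:.4f}")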
CASE STUDY: BRAZILIAN COMPUTERS AND EDUCATION COMMUNITY DATA
Members of the Special Committee on Informatics in Education (IE) were asked to suggest questions of interest to the Brazilian IE community (possible competence questions for the study, gathered as part of the requirements elicitation). As a result, a set containing more than 30 questions was produced; Table 5 lists the questions proposed by the members. To answer some of the questions asked, it is necessary to cross-reference different sources of information about the researchers and their publications, namely the Brazilian Journal of Informatics in Education (Revista Brasileira de Informática na Educação - RBIE) 2, the Workshop on Informatics in School (Workshop de Informática na Escola - WIE) 3, the Brazilian Symposium on Informatics in Education (Simpósio Brasileiro de Informática na Educação - SBIE) 4, and the researchers' curricula, which in Brazil are available on a platform called Lattes 5. The authors assume there may be equivalences between distinct elements of the datasets (i.e., instance equivalence, class equivalence, and class-instance equivalence), both within the same dataset and across different datasets. It is worth highlighting that the datasets were made available as XML files, and it is assumed the datasets are complete, corresponding to the whole population of the corresponding publications and curricula. The solution proposed for the data models assumed that the data conform to the schemas (i.e., data from different schemata need to be mapped to a common schema before performing instance matching). The XML files therefore had to be converted to RDF, and the dac 6 and Lattes ontologies were used to model the data. The first ontology is intended to model the publishing domain (see Figure 9), whereas the second was constructed to model the Lattes domain (see Figure 10). Another assumption is that the OpenRefine 7 tool, with its extension to support RDF, can transform the data to RDF. This tool was selected owing to its ease in creating the transformation templates. After the data transformation, the ontologies and the data were persisted in Virtuoso 8.
Figure 11. The conversion process to RDF
Figure 11 illustrates the data conversion process. With the data conversion process, 1.1 million triples were generated, distributed as follows: 96%, or 1,094,307, triples pertaining to Lattes; 1.61%, or 18,363, to the SBIE; 1.21%, or 14,601, to the WIE; and 1.1%, or 12,503, to the RBIE. This research assumed that (i) authors may publish one or more papers in any of the publication datasets; (ii) each paper must have at least one author; (iii) each paper may have one or more authors; (iv) a paper's author (in the RBIE, SBIE and WIE datasets) may not be registered in the Lattes dataset, thus there may be no corresponding Lattes entity for some authors; and (v) although there may be different authors with similar names, it is possible to find the corresponding entity in the Lattes dataset, as it also contains the publications for each author. Some concerns are related to the data quality in the datasets: there may be different name formatting (e.g., first name + middle name initials + surname in one instance and first name + last name in another instance) and untrue information (e.g., false e-mails). It is expected that the matching algorithms can help deal with these issues, but the Lattes dataset is more reliable and should help to resolve the inconsistencies.
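To make the name-formatting issue concrete, here is a minimal, hypothetical sketch of one way to compare author names written with and without middle initials. It is an illustration of the data-quality problem described above, not the tool's actual pre-processing code; the example names are invented.

# Sketch: comparing author names with different formatting
# ("first name + middle initials + surname" vs. "first name + surname").
import unicodedata

def normalize(name: str) -> list:
    """Lowercase, strip accents, and split into tokens."""
    stripped = unicodedata.normalize("NFKD", name)
    stripped = "".join(c for c in stripped if not unicodedata.combining(c))
    return stripped.lower().split()

def names_compatible(a: str, b: str) -> bool:
    ta, tb = normalize(a), normalize(b)
    if not ta or not tb:
        return False
    # Require matching first and last tokens; middle tokens (e.g.,
    # initials) are ignored in this simple sketch.
    return ta[0] == tb[0] and ta[-1] == tb[-1]

print(names_compatible("José A. Pereira", "Jose Pereira"))   # True
print(names_compatible("José A. Pereira", "Joana Pereira"))  # False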
EXECUTION OF THE PROCESS
This section describes how each of the steps of the matching process was performed.
Selecting Datasets
This step refers to the selection of the datasets used as input for the instance matching process. It should be noted that two or more datasets can be selected. The datasets RBIE, SBIE, WIE, and Lattes were selected.
Identifying Concepts
This step involves selecting the concepts (main and related) that are used in the process. Currently, there is only one restriction regarding the selection of the concepts: it is possible to select only one main concept.
Main Concept: To help the user identify the main concept, the query presented in Code 1 was executed. Table 6 presents the results obtained from the execution of the query.
Table 6. Ontology concepts and quantity of instances
The Author concept, which represents the second-highest number of instances in the data, was selected as the main concept. The concept was chosen not because of the number of instances but for strategic reasons, as the purpose of this case study was to cross-reference information about researchers.
Related Concept: The query presented in Code 2 was executed to select the related concept. The Contribution concept, returned by the query, was selected and used during the cascade alignment as the related concept. Note that more than one related concept can be selected.
Listing Resources
The resource list is generated automatically based on the previously selected concepts. From the list of resources, the candidate pairs are assembled. It is worth noting that, in the study in question, a dataset can contain more than one instance for the same real-world entity (e.g., more than one URI for the same researcher). Thus, candidate pairs were also generated within the same dataset, which characterizes internal alignment.
Aligning Data
The alignment step is responsible for determining the matches between the instances. In this process, there are two alignment approaches, simple and cascading. In the simple approach, the resources are compared directly, utilizing the properties and their characteristics. In the cascading approach, the resources are compared based on the related resources.
Main Concept: As in other instance matching approaches (Zhang et al., 2016), functions that analyze the similarity between two resources were used. The function presented in Equation (2) was used to determine the similarity between the pairs. This similarity function generates values between 0 and 1, with 0 indicating totally distinct and 1 denoting equal. In addition to the function used, thresholds were defined, meaning that pairs with similarity values greater than the threshold were considered matches. Initially, the threshold value was defined arbitrarily and later adjusted with the help of tests: the same dataset was aligned several times, using a different threshold value for each execution. Finally, the threshold value was set to 0.88.
Related Concept: Cascade alignment consists of aligning instances of the main resource based on related resources.
This step of the process is performed for each of the related concepts selected in the concept identification step, and it consists of three activities:
• Aligning related resources: simple alignment is performed between the instances that pertain to the related concept;
• Retrieving instances of the main concept: based on the alignment between instances of the related resources, the instances that pertain to the main concept are retrieved;
• Aligning instances of the main concept: based on the retrieved instances, new candidate pairs are generated and become input for the simple alignment.
Results
The results presented in this section are separated into two parts. The first part consists of the alignments generated, whereas the second part addresses the answers to the community's questions.
Alignments
After the alignment performed by the tool, a survey was conducted (a crowdsourcing effort with Brazilian researchers in Computers in Education), from which it was possible to generate information, such as the number of resources repeated in the databases, the total number of resources aligned with a Lattes profile, and the precision, recall, and f-measure (Goutte and Gaussier, 2005), to analyze the reliability of the alignments (see Table 7). It is worth noting that the reference alignment was generated manually with the help of domain experts.
Responses
The instance matching process identifies instances that refer to the same entity and allows complementary information to be integrated. Thus, it was necessary to consult more than one database simultaneously to answer the community's questions. Owing to the number of questions asked, only a few of them are presented below. Another factor that should be highlighted is the problem with the data provided by the authors (such as multiple authors for the same paper and different name formatting). Additionally, untrue information was provided; e.g., 77 authors used the same e-mail (author@email.com) to avoid spam.
Q1 - Where in Brazil (states) are IE researchers located? Through this question, it is possible to know where the researchers who published in the RBIE, SBIE, or WIE work. This information is obtained through the professional address found in the Lattes curriculum profile of each researcher. Thus, to answer this question, it is necessary to identify these researchers' profiles in the Lattes curriculum. The query presented in Code 3 retrieves the professional address of the researchers who published in the WIE (for other publication venues, it is easily adapted). In line 9, the owl:sameAs transitivity property is used to retrieve all of the matching profiles. For the query not to loop, a limit of up to five elements in the transitivity chain was established. This value was chosen manually, and with it, it was possible to reach all the profiles attainable through transitivity.
Q8 - Where did the IE researchers in Brazil conduct their doctoral studies? Through this question, it is possible to know where the researchers who published in the RBIE, SBIE, and WIE concluded their doctoral studies. Similar to the professional address information, this information can also be obtained by cross-referencing these publishing databases and Lattes. The query presented in Code 4 retrieves the institution where the researchers who published in the WIE completed their doctoral studies. Figure 12 shows the concentration of doctorates completed per university.
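A hedged sketch of how such a cross-referencing query could be issued with rdflib is shown below. The graph file name is an assumption, the URI patterns are those of the Code 3 excerpt that follows, and the SPARQL 1.1 property path owl:sameAs+ used here is unbounded, whereas the paper's actual query bounds the chain at five elements.

# Sketch: following owl:sameAs chains from WIE author URIs to Lattes
# profiles, in the spirit of Code 3/Code 4. "aligned_data.ttl" is an
# assumed dump of the aligned triple store, not a file from the paper.
from rdflib import Graph

g = Graph()
g.parse("aligned_data.ttl")  # assumption: local dump of the aligned data

query = """
PREFIX owl: <http://www.w3.org/2002/07/owl#>
SELECT ?wieAuthor ?lattesProfile WHERE {
  ?wieAuthor owl:sameAs+ ?lattesProfile .
  FILTER regex(str(?wieAuthor),
               "http://www.ic.ufal.br/dac/author/wie/(\\\\d)+$", "i")
  FILTER regex(str(?lattesProfile),
               "http://www.ic.ufal.br/dac/author/lattes/(.)+$", "i")
}
"""
for row in g.query(query):
    print(row.wieAuthor, "->", row.lattesProfile)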
An excerpt of the filters used in Code 3:
# subject must be in wie database
filter regex(?s, "http://www.ic.ufal.br/dac/author/wie/(\\d)+$", "i")
# object must be in lattes database
filter regex(?g, "http://www.ic.ufal.br/dac/author/lattes/(.)+$", "i")
CONCLUSION
This study presented a semiautomatic approach for aligning datasets in real-world scenarios. This approach is necessary due to the need for solutions capable of reliably aligning data with the least possible domain knowledge. Additionally, the solution lets the alignment be executed directly within the triple storage, such that there is no need to generate files for the alignment. A case study was performed to evaluate the proposed approach in a real scenario, aligning the RBIE, SBIE, WIE, and Lattes datasets. The use of the alignment solution enabled various questions to be answered. Additionally, it was possible to note problems related to the information provided by authors submitting their work. This research also conducted an experiment to evaluate the proposed approach and compare its effectiveness with other tools using the precision, recall, and f-measure metrics. In the experiment, these metrics were evaluated in two alignment scenarios, in which the proposed approach obtained good results, compatible with the top instance matching tools. Despite not having the best values in every evaluation, the proposed approach stands out due to the absence of implementations specific to the dataset or benchmark (for instance, it does not use the reference alignment to improve its processing), requiring less work when a context change is necessary. This paper presents the following contributions:
1. An alignment process for Linked data based on a general approach, independent of algorithms specific to the datasets and exemplars, with good f-measure results.
2. The execution of the alignment directly within the triple storage.
3. A real-world case study that deals with heterogeneous datasets and data quality issues.
Future studies can be performed to analyze the tool's effectiveness with datasets having different characteristics (e.g., domain, number of triples, and quality). Moreover, other studies will be conducted with the following objectives:
• Automating the whole alignment process; one possible method would be to choose the related concepts automatically.
• Optimizing the performance; given that the related concepts can be aligned in parallel, possible approaches include parallelism and distribution.
• Improving the quality of the calculation of similarity between resources; one possible approach would be the composition of similarity functions and the identification of the most significant characteristics for identifying similarity.
2022-02-18T16:19:54.674Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "123c0ae10f70dd4c2cea702183ce5a24d346c06f", "oa_license": null, "oa_url": "https://www.igi-global.com/ViewTitle.aspx?TitleId=295977&isxn=9781799893967", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7518a0d43bbcb97926245b39bad260b2b7626f42", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }
245887826
pes2o/s2orc
v3-fos-license
Editorial: Pharmacogenetics and Pharmacogenomics in Latin America: Ethnic Variability, New Insights in Advances and Perspectives: A RELIVAF-CYTED Initiative Molecular Genetics Laboratory, Clinical Biochemistry Department, School of Chemistry, Universidad de la República, Montevideo, Uruguay; Latin American Network for the Implementation and Validation of Clinical Pharmacogenomics Guidelines (RELIVAF-CYTED), Madrid, Spain; Laboratory of Chemical Carcinogenesis and Pharmacogenetics, Department of Basic-Clinical Oncology (DOBC), Faculty of Medicine, University of Chile, Santiago, Chile; Genetic Division, Department of Medicine, INFIBIOC, Hospital de Clínicas José de San Martín, Buenos Aires University, Buenos Aires, Argentina; Clinical Biochemistry Department, Hospital de Clínicas José de San Martín, School of Pharmacy and Biochemistry, Institute for Research in Physiopathology and Clinical Biochemistry (INFIBIOC), University of Buenos Aires, Buenos Aires, Argentina
INTRODUCTION
Since the pharmacogenetics and pharmacogenomics (PGx) field started to rise, information about the relationship between actionable genes, genotypes, and response to drugs has increased exponentially (Nicholson et al., 2021). There is evidence of the utility and impact of genetics in the choice of therapeutic regimens, improving their effectiveness and safety (Arbitrio et al., 2021). Some international efforts have even created clinical guidelines that allow the implementation of pharmacogenomics in daily clinical practice. In addition to clinical outcomes, economic benefits have been associated with the translation from "the bench to the bedside". Moreover, several major PGx expert organizations, such as the Clinical Pharmacogenetics Implementation Consortium (CPIC, 2021) and the Dutch Pharmacogenetics Working Group (DPWG), provide gene-drug guidelines for actionable variants. In addition, Ubiquitous Pharmacogenomics (U-PGx, 2021), the Latin American Network for the Implementation and Validation of Pharmacogenomics Guidelines (RELIVAF-CYTED, 2021), and the Southeast Asian Pharmacogenomics Research Network (SEAPharm; Chumnumwat et al., 2019) have investigated pharmacotherapeutic recommendations guided by pharmacogenetics. In this respect, based on scientific evidence, the Food and Drug Administration (FDA) has published a list of PGx biomarkers for drug labelling (FDA, 2021). Even though high-quality research addresses the utility of implementing pharmacogenetics programs in clinical practice, most of this evidence comes from the United States or Europe. Moreover, it commonly does not include the Latin American population, or, when the guidelines do, it is considered as one big group. Some recently formed regional scientific societies (RELAGH, 2014; SOLFAGEM, 2021) and international efforts (RELIVAF-CYTED) are looking to shorten the region's gap in evidence and information. In this respect, Latin America is a vast region with characteristics that do not allow easy implementation of research made in other settings (Quiñones et al., 2014). It is one of the most genetically diverse areas, having frequencies or polymorphisms not found in other regions. There is a lack of high-quality research focused on the Latin American population about the relationship between specific genes and drug response, and there is also a lack of knowledge of variant frequencies. Altogether, there are many disadvantages to implementing pharmacogenetics in clinical practice in Latin America.
Sixteen articles are included in this issue: eleven original/experimental research articles, two brief research reports, one review article, one case report, and one opinion, covering different and complementary aspects of pharmacogenomic research in this region. Workflows of data-driven modeling and model-driven experimentation have led to the development of in silico algorithms that include pharmacogenomic data on disease risk at the patient-population level (Wolkenhauer et al., 2014). In this Research Topic, four predictive models based on pharmacogenomics have been developed to identify patients who were suitable for preventive genotyping. Although the models must be validated with a larger number of patients and do not necessarily apply to all populations, they are a very good first approximation to predict the incidence of adverse effects among patients undergoing different therapies in Latin America (e.g., Miranda et al.). The PGx of the immunosuppressive drug tacrolimus (TAC) has been extensively studied, and according to the CPIC guidelines (Birdwell et al., 2015), an increase of the starting dose for CYP3A5 expressers is recommended, followed by therapeutic drug monitoring to guide dose adjustments. Two manuscripts address this issue in Chilean kidney transplantation patients, one in children (Krall et al.) and the other in an adult population (Contreras-Castillo et al.), for immunosuppressive treatment (cyclosporine and tacrolimus) after transplantation. Antiretroviral treatment (ART) is generally not well tolerated, and most patients present important adverse drug reactions (ADRs) that potentially limit treatment adherence or lead to its interruption (Saag et al., 2020). Poblete et al. retrospectively evaluated the UGT1A1*28 and CYP2B6 c.516G > T frequencies and their influence on major ADRs in 67 adult HIV patients from Chile, as a starting point to validate CPIC guidelines in Latin America in the near future. Two investigations referred to children with acute lymphoblastic leukemia from different angles. From Mexico, Gándara-Mireles et al. analyzed the frequency distribution and the association between the illness and the most common polymorphisms in the ABCC1, NCF4, and CBR3 genes. The influence of the TPMT-VNTR polymorphism on 6-MP-related hematological toxicity was also confirmed. The studies performed in Duchenne Muscular Dystrophy (Luce et al.), cardiovascular disease (Gálvez et al.), and severe encephalopathy patients (Kravetz et al.) emphasized the importance of identifying both already known and novel variants for differential diagnosis and patient management. Luce et al. described the mutational spectrum of the DMD gene in 400 Argentinian patients with a clinical diagnosis of dystrophinopathy. Gálvez et al. reported a significant association between APOB, APOE, and MTHFR polymorphisms and lipid levels, especially in women, among 193 healthy subjects from Chile. Identifying a genetic variant in the KCNT1 channel in an Argentinian pediatric patient with a severe encephalopathy was crucial to include quinidine in the treatment regimen as an antiepileptic drug (Kravetz et al.). Since the discovery of non-coding RNAs, their clinical relevance has become increasingly important. In particular, inter-individual variability in drug response, both in efficacy and toxicity, could be due to variation both in miRNA gene sequences and in circulating miRNA levels (Latini et al., 2019). Ubilla et al.
found an increase in miRNA-33b-5p levels in hypercholesterolemic patients under atorvastatin therapy and proposed this microRNA as a biomarker to follow the response to statins. Ruiz et al., concerned about BCR-ABL1 tyrosine kinase inhibitor resistance in chronic myeloid leukemia patients, observed a global decrease of microRNA levels in resistant cells, opening a promising field for future studies. On the other hand, biobanks allow access to many well-classified, high-quality samples and establish the conditions indispensable for achieving reproducible research results (Coppola et al., 2019). Vargas and Cobar express their opinion about creating biobanks and believe that the same requirements will be necessary to obtain pharmacogenetic information and efficient therapeutic responses in Latin America. Barriers to PGx implementation include a lack of knowledge, training, and confidence among physicians to apply pharmacogenomic tests (Rigter et al., 2020). As such, Undurraga et al. reported on an anonymous online survey addressed to psychiatrists from Chile, observing a low acceptance of PGx tests but a clear interest from psychiatrists in their potential incorporation into clinical practice. We proudly present this Research Topic, which aims to gather high-quality pharmacogenetic and pharmacogenomic research with a particular focus on the Latin American population and its needs. The main goal is to increase the information on the clinical implementation and the impact of pharmacogenetics in Latin American patients, as well as to collect experience and project the field in the region, looking for strategies and new perspectives, and to strengthen collaborative research among countries in the region.
AUTHOR CONTRIBUTIONS
MR, AL, AL-C, NV, and LQ contributed to the conception of the Research Topic idea and to the writing of the manuscript.
2022-01-11T14:24:33.624Z
2022-01-11T00:00:00.000
{ "year": 2021, "sha1": "d22b0f1ff2fd98b38470c044f04f5e9dee1c73e7", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2021.833000/pdf", "oa_status": "GOLD", "pdf_src": "Frontier", "pdf_hash": "d22b0f1ff2fd98b38470c044f04f5e9dee1c73e7", "s2fieldsofstudy": [ "Medicine", "Chemistry", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
119231600
pes2o/s2orc
v3-fos-license
Non-Abelian behavior of $\alpha$ Bosons in cold symmetric nuclear matter The ground state energy of infinite symmetric nuclear matter is usually described by strongly interacting nucleons obeying the Pauli exclusion principle. We can imagine a unitary transformation which groups four non-identical nucleons (i.e. with different spin and isospin) close in coordinate space. Those nucleons, being non-identical, do not obey the Pauli principle, thus their relative momenta are negligibly small (just enough to fulfill the Heisenberg principle). Such a cluster can be identified with an $\alpha$ boson. But in dense nuclear matter, those $\alpha$ particles still obey the Pauli principle since they are constituted of Fermions. The ground state energy of nuclear matter made of $\alpha$ clusters is the same as for nucleons, thus it is degenerate. We could think of $\alpha$ particles as vortices which can now braid, for instance making $^8Be$, which leaves the ground state energy unchanged. Further braiding to heavier clusters ($^{12}C$, $^{16}O$, ...) could give a different representation of the ground state at no energy cost. In contrast, d-like clusters (i.e. N=Z odd-odd nuclei, where N and Z are the neutron and proton number respectively) cannot describe the ground state of nuclear matter and can be formed at high excitation energies (or temperatures) only. We show that even-even, N=Z, clusters could be classified as non-Abelian states of matter. As a consequence, an $\alpha$ condensate in nuclear matter might be hindered by the Fermi motion, while a condensate of $^8Be$ or heavier clusters could be possible. Nuclei are composed of strongly interacting nucleons. In some conditions they can also be described as $\alpha$ clusters [1]: thus Fermions in one case and Bosons in the other. However, this distinction into particles of different symmetry is not the entire story. In fact, for Boson-like nuclei an important constraint is the Pauli principle acting on their constituents, which makes their wave functions antisymmetric under the exchange of two identical Fermions; thus they can behave as Fermions.
Recently, interest has grown in states of no definite symmetry, which are classified as non-Abelian [2]. Those are composite particles which do not follow the Boson or the Fermion statistics. Properties of those composites are [2,3]: 1) The ground state of the system is degenerate, not based on symmetry arguments of the wave function, and there exists an energy gap to the first excited state. 2) Exchanging two (quasi)particles does not result simply in a change of sign, as for Fermions or Bosons. More importantly, those exchanges might or might not lead the system from one possible ground state to another, depending on the order of the exchanges. This ordering dependence is thus non-commutative, or non-Abelian. To see if those properties could be fulfilled by nuclear systems, let us consider the ground state of $^{12}C$. This can be well described by strongly interacting nucleons, but we can also think of grouping four non-identical nucleons to make an $\alpha$ particle, so that the ground state is described as 3$\alpha$ clusters. However, the correct ground state energy cannot be obtained if we group the nucleons into deuteron-like systems, or if we group the particles into four identical Fermions; the latter cases will result in higher energies. Of course, a combination of $\alpha$ particles and nucleons might also recover the ground state energy, even though such combinations might be more probable for heavier systems. The first excited state of $^{12}C$ is at about 7 MeV, the breakup into 3 alphas (Hoyle state) [1]. Thus the first two properties of non-Abelian systems might be recovered in nuclear systems, but there are other conditions to be fulfilled, which we will discuss later when going into the details of an infinite symmetric nuclear matter system. Other important information can be obtained from heavy ion collisions at beam energies around the Fermi energy. Fragmentation of those nuclei, especially in collisions of $\alpha$-cluster candidates, e.g. $^{40}Ca + ^{40}Ca$ etc., shows a large production of $\alpha$ particles compared to nucleons, but not of d-clusters [4,5]. Infinite nuclear matter is an idealized case of a system made of neutrons and protons where, to avoid singularities, the Coulomb force is conveniently 'switched off', i.e. the properties of nuclei are corrected for the Coulomb energy and the limit for the mass number A going to infinity is taken. Also, we will be considering symmetric nuclear matter, i.e. the number of neutrons N is equal to the number of protons Z: N=Z. This simplifies the discussion as far as the symmetry energy is concerned and, more importantly, the possibility of clustering of matter into tritons, helions, etc. Those composites are Fermions, and their combinations could as well represent the ground state of symmetric nuclear matter but, as for deuterons, the resulting energy is higher than the well-established energy of symmetric nuclear matter, E/A = −15 MeV. Thus we will neglect those states in the present work and concentrate on Boson-like composites only. We could write the energy per particle of nuclear matter as [6]:

$$\frac{E}{N_x} = S\,\bar{\varepsilon}_f\,\tilde{\rho}^{2/3} + \sum_{n\ge 1} A_n\,\tilde{\rho}^{\,n} + \frac{BE}{N_x}, \qquad (1)$$

where the first term refers to the average kinetic energy of a free Fermi gas with $\bar{\varepsilon}_f = \frac{3}{5}\varepsilon_f = 22.5$ MeV. S = 1 or 0 for Fermions or Bosons respectively, but there will be some exceptions as discussed below. $N_x = A$, the number of nucleons, when considering the system made of A nucleons. Otherwise $N_x = A/A_{cl}$, where $A_{cl}$ is the mass number of a cluster (2 for d, 4 for $\alpha$, etc.), when it is assumed that nuclear matter is made of clusters.
The other terms are due to potential interactions and correlations. The n = 1 term is obtained by taking into account the interaction between pairs of particles, and the subsequent terms involve the interactions among groups of three, four, etc., particles. The coefficients A_n in the expansion of eq. (1) are called first, second, third, etc., virial coefficients [3]. The last term, BE/N_x (taken with a minus sign; zero for nucleons), is the binding energy of the cluster corrected for the Coulomb energy, which is simply given by V_c = (3/5)·1.44 Z²/R, with R the cluster radius in fm. This term simply reflects the fact that, when the density goes to zero, the energy per particle becomes the binding energy per particle of the cluster. ρ̃ = ρ/ρ₀ is the reduced density of the system, ρ₀ = (A/A_cl)/V₀ is the ground state density, and V₀ its volume. The equation of state (EOS) for nucleons has been derived in [6] and we will adopt the CCSδ3 parametrization here. It includes the possibility of a quark-gluon plasma at high densities, which is not important for this work since we will mainly be discussing results at densities lower than the ground state density. We stress that any 'nucleon' EOS gives similar results if it fulfills the conditions discussed below. This EOS will be our reference for comparison with the cases where clustering is assumed. In order to fit the parameters entering eq. (1), we impose the conditions [4,5,7,8]:

E/A(ρ̃ = 1) = −15 MeV,  P(ρ̃ = 1) = 0,  K(ρ̃ = 1) = K₀,   (2)

where the pressure P is given by

P = ρ² ∂(E/N_x)/∂ρ,   (3)

and the (isothermal) compressibility is defined in nuclear physics as

K = 9 ρ² ∂²(E/A)/∂ρ² |_{ρ=ρ₀}.   (4)

Those three conditions imply that we have enough information to include third-order terms in the expansion of eq. (1), i.e. four-body forces. From physical arguments we need A₃ ≥ 0, otherwise the system will collapse to infinite density [6]. The cluster binding energies are obtained from experimental data corrected for the Coulomb energy as discussed above. The Fermi energy has to be included for d and α, since they are built of non-identical particles: when two or more of those clusters are within a phase-space volume comparable to h³ (h being Planck's constant), their constituents must fulfill the Pauli principle. On the other hand, heavier clusters have their own internal Fermi motion, thus the first term in eq. (1) should be zero for those particles. However, we might consider those heavier systems as made up of d or α clusters as well and include the Fermi motion in eq. (1). We will discuss the case where heavier clusters are made of nucleons with their own Fermi motion below. For now we consider heavier clusters as a 'braid' of α particles (or d particles for odd-odd clusters). Using the conditions above, eq. (2), we can solve the three equations (1), (3) and (4) and obtain the relevant parameters. The result is plotted in figure 1, where we see that, by construction, all different clusters give the correct properties of infinite nuclear matter, eq. (2). For completeness we report below the values of the parameters for some cases:

(a) A₁ = −360, A₂ = 180, A₃ = 0 MeV for α particles;
(b) A₁ = −255, A₂ = 202.5, A₃ = −50 MeV for d particles.   (5)

The first important difference is that odd-odd nuclei give A₃ < 0, e.g. eq. (5b), thus their EOS is unphysical. It is also seen in the figure that at high densities even-even nuclei have a larger energy than odd-odd nuclei and, more importantly, that matter would collapse into deuterons if their EOS were correct. Notice the value A₃ = 0 in eq. (5a), which implies that four-body forces are not important for α clusters. This might be a mere coincidence or another indication that nuclear matter can indeed be treated as α clusters.
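As an illustration of this fitting step, the sketch below solves the three conditions of eq. (2) for the virial coefficients, assuming the polynomial form reconstructed in eq. (1). The compressibility K₀ = 225 MeV and the Coulomb-corrected binding energies per nucleon (7.5 MeV for α, 1.25 MeV for d) are values inferred here so that the output reproduces eq. (5); they are assumptions of this sketch, not numbers quoted in the text.

```python
import numpy as np

def fit_virial_coefficients(A_cl, BE_per_nucleon, S=1,
                            eps_f=22.5, E0=-15.0, K0=225.0):
    """Fit A1, A2, A3 (MeV, per cluster) in
        E(r) = S*eps_f*r**(2/3) + A1*r/2 + A2*r**2/3 + A3*r**3/4 - BE
    (r = rho/rho0) from the saturation conditions
        E(1) = E0,  P(1) = r**2 * dE/dr = 0,  K(1) = 9 * r**2 * d2E/dr2 = K0.
    The fit is done per nucleon (a_n = A_n / A_cl), then scaled back."""
    T = S * eps_f
    # E(1):   a1/2 +   a2/3 +   a3/4 = E0 - T + BE
    # E'(1):  a1/2 + 2*a2/3 + 3*a3/4 = -(2/3)*T
    # E''(1):        2*a2/3 + 3*a3/2 = K0/9 + (2/9)*T
    M = np.array([[1/2, 1/3, 1/4],
                  [1/2, 2/3, 3/4],
                  [0.0, 2/3, 3/2]])
    rhs = np.array([E0 - T + BE_per_nucleon,
                    -(2/3) * T,
                    K0/9 + (2/9) * T])
    return A_cl * np.linalg.solve(M, rhs)

# alpha: BE/A = 7.5 MeV after the Coulomb correction (assumed value)
print(fit_virial_coefficients(A_cl=4, BE_per_nucleon=7.5))   # [-360. 180. 0.]
# deuteron: BE/A = 1.25 MeV after the Coulomb correction (assumed value)
print(fit_virial_coefficients(A_cl=2, BE_per_nucleon=1.25))  # [-255. 202.5 -50.]
```

Both outputs match eq. (5), which supports the form of eq. (1) reconstructed above.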
We will discuss the meaning of A₃ = 0 for other cases below. At lower densities the compressibility becomes negative for all systems, which means that the system is unstable. In particular, at zero temperature, the system would prefer clustering into particles of higher binding energy. The situation would be different at finite temperature, where the entropy would favor nucleons rather than clusters. A discussion of finite temperature is also interesting and will be the subject of a following work. Thus, as we see from figure 1, the ground state of nuclear matter can be well described by different spin = 0 clusters, and it is degenerate. Those Bosons still obey the Pauli principle, thus their properties, also at finite temperatures, differ from those at zero density. Of course, when we go from nuclear densities to atomic densities, the Fermi motion of those particles is negligible and the Boson properties are recovered, i.e. we can have a superfluid helium liquid. The ground state is thus degenerate and stable against small perturbations. As we have seen, we have to compress or decompress the system in order for different clusterizations to be selected: at high densities clusters will dissolve into nucleons (and at even higher densities into quarks and gluons); at lower densities, nucleons will coalesce into clusters of higher and higher binding energy. Thus in those conditions the system might prefer one configuration with respect to the others. Those features might give rise to first-order phase transitions at low densities or a cross-over at higher densities. Clusterization into particles with spin = 1 or higher is not possible, as we discussed above, but we could ask what the energy of nuclear matter would be if it were built of such clusters. To estimate such an energy, we can leave the energy per particle in eq. (1) as a free parameter and impose A₃ = 0 and A₂ > 0. Solving the simple system of equations gives the results plotted in figure 2. As we see, the energy of such systems would be higher than the ground state energy of infinite nuclear matter but becomes comparable for heavier odd-odd clusters. They could appear at lower densities, since their energy is larger than that of nucleons, but they will have to compete with even-even clusters. At higher densities they are not favored by energy considerations and will break into nucleons. At even higher densities, nucleons will break into quarks and gluons. Now we can consider the situation where nucleons group into clusters of 8, 10, 12, ... particles. In this case those clusters contain identical particles, thus the constituents have a Fermi motion. The cluster is now Boson-like, and forcing two such Bosons into the microscopic phase-space volume does not call for antisymmetrization if we neglect the difference between the Fermi momenta of those clusters and the Fermi momentum of infinite nuclear matter. Of course, such an approximation becomes more and more exact with increasing size of the cluster. As a result, we can now neglect the Fermi motion in eq. (1) and calculate the EOS for heavier clusters. The results are plotted in figure 3 and compared to the CCSδ3 EOS. Now the coefficient A₃ > 0 for all plotted particles, including ¹⁰B, which was not the case when such a cluster was considered as a d-like system. Also in this case the basic properties of infinite nuclear matter can be recovered, which means that the ground state is degenerate even in this representation.
Very importantly, a Bose condensate is now possible starting with the ⁸Be cluster. We stress that we would obtain a state of larger energy if we grouped 8, 10, 12, ... identical nucleons. Thus the transformations which lead to such states are not commutative. It is interesting to note that in all the physical situations addressed in this work the compressibility is very similar (figures 1-3) and systematically lower than the Fermi gas compressibility. We expect the difference between the Fermi gas result and the strongly interacting cases to decrease with increasing temperature. Thus the Fermi gas compressibility could be a good approximation when deriving temperatures and densities from multiplicity and quadrupole fluctuations in dynamical nuclear systems [9]. The ideas discussed in this paper could be investigated experimentally in heavy ion collisions. The possibility of describing nuclei as α clusters is quite well established [1]. Some progress has been made recently [10] on cluster formation in low-density nuclear matter at finite temperatures. Efforts are being made to search for boson condensates in nuclei [10,11]. A confirmation of some of the ideas discussed here would be the determination of a condensate of ⁸Be or larger cluster sizes, as discussed above. Different choices of the interaction, for instance strong momentum-dependent potentials [10], might give an α condensate as well, at variance with our predictions. To distinguish among different models, a possibility would be to compare the multiplicity fluctuations of protons and α particles [9]. Those fluctuations are quite different for Fermions and Bosons [3] and thus could distinguish between the two cases. If the fluctuations turn out to be the same (apart from some Coulomb effects that could be determined by comparing neutron, triton and helion multiplicity fluctuations), it could be a signature that α particles do not form a condensate in nuclei, as suggested by some authors [11]. Thus an experimental investigation of multiplicity fluctuations under the same experimental conditions could be quite interesting, for instance in ⁴⁰Ca + ⁴⁰Ca at beam energies around the Fermi energy or below. Similar ideas could be applied to pions, kaons and other bosons produced in ultra-relativistic heavy ion collisions. Quarks and gluons could play the role of the nucleons in nuclear matter, while hadrons would be clusters of mixed symmetries.
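The difference in multiplicity fluctuations invoked here can be anchored to the textbook single-mode result for ideal quantum gases (a standard relation, not one derived in this paper):

$\langle (\Delta n_k)^2 \rangle = \langle n_k \rangle \, (1 \mp \langle n_k \rangle)$,

with the upper sign for Fermions (fluctuations suppressed by Pauli blocking) and the lower sign for Bosons (fluctuations enhanced by bunching). Summed over modes, this is the origin of the expected difference between proton and α multiplicity fluctuations measured in the same reaction.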
T. G. FESENKO

GENDER MAINSTREAMING AS A KNOWLEDGE COMPONENT OF URBAN PROJECT MANAGEMENT

The prospects of including the concept of «gender» in socially oriented urban projects at the level of planning and design processes are considered. Gender planning is oriented toward transforming the urban landscape into a harmonious combination of the spatial rights and opportunities of women and men, and toward the formation of «quality living spaces». It is noted that the current policy of Ukrainian cities is, as a rule, «gender blind». Urban space contains a certain gender asymmetry owing to the unequal provision of the «right to the city». Gender-sensitive indicators of urban planning are proposed that can have a transformative impact on the urban activity of beneficiaries from different gender groups (gender+). An integrated framework for gender-sensitive urban planning has been developed and tested on examples of Ukrainian projects, in particular landscape greening and the improvement of residential adjacent areas. Keywords: knowledge management, urban project, gender diversity, gender-sensitive planning, gender mainstreaming, friendly urban space.

Introduction. Ukrainian industrial cities have been experiencing an infrastructure crisis. There are many abandoned large areas previously occupied by production sites. Conventional urban planning has been characterized by cities being divided into zones intended for specific activities, with houses, markets, and factories in separate locations. Nowadays, places for the comfortable life of citizens, rather than production sites and adjacent areas, should be the key concern of the urban environment. The landscape of the contemporary city becomes the environment in which human life is reproduced and the human personality is created and deployed. The urban development strategy envisages the creation of a good quality of life for all residents. Urban spaces are transformed into a field of possibilities for the formation of a creative, busy life for each inhabitant [1]. Increased attention to the needs and requirements of citizens can be successfully implemented only if the various realities of life, resulting from the different requirements and needs of women and men, are included in all products and services of the city. Gender and the city mutually influence and shape each other. Today, cities designed along conventional lines no longer conform to the reality of people's lives, both women's and men's [2]. Urban gender inclusion makes it possible to investigate the design of urban spaces from the perspective of women's and men's needs. A gender perspective on the design quality of urban space has to focus on «gender+»: not the rigid categories of «women» and «men», but respect for residents in all their diversity. Traditional urban spaces are planned from a supposed equality, which results in spaces that are unfair and uninhabitable for most people. The planning and organization of urban areas usually show gender blindness in Ukrainian cities (equal opportunities for women and men in access to services offered by the municipality have not been provided, and the quality of daily life of all inhabitants of the city has not been supported). In the practice of Ukrainian cities there are «hidden» discriminatory practices affecting the everyday life of ordinary Ukrainians, female and male. In Ukrainian cities, where more than 70% of the population lives, there is gender discrimination in the right of access to quality living space, the «urban areas».
There is gender asymmetry in urban living space, particularly in the way the right of access to «urban spaces» is provided: the time and money spent to access recreational areas (gardens, parks), educational and cultural establishments, health care facilities, shopping facilities, etc. A gender approach to city politics should involve the formation of friendly urban space (for children and women). Equal access to the public spaces of the city by all residents, regardless of gender, age, etc., is crucial for sustainable development. Urban development is seen as a process of creating gender justice, with users' spatial requirements coming to the fore and becoming the starting point in the development of concepts and models for future urban structures of space and settlement, in line with the concepts and strategies of sustainable development. Applying a gender perspective to urban planning is essential for thinking about and designing cities that consider the diversity of experiences and needs of the population. However, the gender approach has been used only fragmentarily, without reference to a system of quality indicators of gender diversity. This is why the development of models for the gender-sensitive transformation of the «urban landscape» is important for the methodology of urban project management. This article aims to present the construction of a gendered framework for urban design project management and to test it with examples from the Ukrainian context. Achieving this objective presupposes fulfilling the following tasks: to determine the gender components of the everyday life of city residents; to highlight the issue of including gender parameters in the system of urban project knowledge.

Urban planning as a gender issue. The main proposition of international urban gender studies demonstrates the close connection between urban development and gender relations. There are many theoretical and practical works from different disciplines that incorporate a gender perspective into urban studies. The social scientist V. Gutierrez [3] deals with «indicators» for the urban living conditions of women and men. Her indicators are based upon gender-sensitive spatial analyses uncovering, and making visible, the androcentric reflex in spatial planning. The creation of a critical mass around the topic led to the research of the European network «Gender, Diversity and Urban Sustainability (GDUS)» [4]. Moreover, the handbook «Gender Mainstreaming in Urban Planning» contains a review of the vast practical experience in implementing the strategy of gender mainstreaming in Viennese city planning over the past 20 years. It determines [5, p. 5]: «Gender equality remains an important topic, as there are inequalities that are related to a person's gender. Gender mainstreaming, a strategy that is also prescribed by the European Union, aims to counter these inequalities. The objective is to take into account the living and working conditions of women and men in planning, implementing and evaluating measures. Only if we recognize and consider these differences can we avoid unequal treatment». UN-HABITAT is an organization that works from a gender perspective and seeks to account for women's everyday life experience. This vision, inclusive of the rest of society, considers participation an essential instrument in projects and sustainability a basic criterion of development [6]. Currently, emerging efforts exist to engender evaluation in the field of urban planning and development.
There is a diverse group of women architects and urban planners interested in rethinking cities, neighborhoods, and architecture in order to eliminate gender discrimination [7]. They work to build cities that reflect the diversity of our society by creating inclusive spaces. New perspectives and potentials are offered by the use of the concept of «gender diversity» (gender+). Gender has grown historically, is socially constructed, and can therefore be changed. Gender refers to socially and culturally dominant gender roles. Gender diversity includes further differentiation by age, ethnicity, physical ability, sexual orientation, class, etc., which are also social constructs and therefore changeable. Gender diversity means considering and promoting the different skills, resources and potentials of women and men, in all their diversity, as equivalent. The physical environment of the city is represented by different types of spaces, as it expresses the ways different gender groups use existing locations. J. Beall [2, p. 11] said: «Stereotypical notions of nuclear families, with male breadwinners journeying across town to work, and women as housewives caring for their children and elderly relatives in residential neighborhoods, have never applied in some situations, and in others no longer apply. The separation of home, work, and leisure is being challenged in cities, as women and men work to transform the urban environment». The perception of the city by different social groups depends on their position in it, unfolds in the practical development of the urban environment, and fills it with new social values. It is important that the city creates conditions for the everyday lives of women, especially for those who work and have small children. Cities have to offer qualitative spatial conditions for families that take into account the needs of parents (especially mothers) in child care. For example, women are more likely to use public transport, including traveling with a child in a baby carriage. Women, whether they have busy careers or are unemployed, single or with a family, continue to be responsible for most domestic tasks: childcare, care for the elderly in families, shopping for the family and so on. All these things put pressure on their daily lives. The city can improve the conditions of daily life for women by developing gender-sensitive infrastructure: recreation areas (gardens, parks), institutions of education and culture, healthcare institutions, shopping facilities and more. Attention should also be paid to urban areas where women have a sense of insecurity at different times of day. Therefore there is an urgent need for more specific spatial planning of cities, because it is important to address discrimination against women. Besides, it is crucial to question the «ideal» guiding principles in planning and the values underlying the planning philosophy with a view to gender equality [8]. Further work is needed to develop objective and easily usable tools.

Gender features of urban space visions. Urbanism defines the city as a place of mobility, a stream of everyday practices, distinguishing cities by their repetitive phenomenological grounds [9]. The focus of localization in defining space depends on what «scale of presence» of the person we are interested in.
Town planning represents the image of the town as experienced by its citizens and its visitors. The concept of urban space is created by a complex of impressions: location, size, relief items, etc. In summary, space is a place that is practiced. Thus, a street geometrically defined by planning is transformed into space by passers-by. When a person moves along a particular segment of the street or rides on public transportation, impressions of the place will always be endowed with emotional connotations, which can be transferred to the general attitude toward the whole area around this point. Understanding the city assumes the integration of two levels of urban space: on the one hand, the area of the city, its buildings, squares and streets; on the other, the people who use all these elements of the town and give them meaning. So the city, as a complex entity that is experienced [10], requires alternative descriptions and maps (a «psychological geography» of urban spaces), including gender. D. Parsons claimed that the city has always been taken in conjunction with the emphasis on its personal life [11, p. 223]. In her study, she described Paris and London in the period between 1880 and 1940 and demonstrated what it means to be a woman in a city that is for her «most promising, sometimes unbearable, but never overpowering, providing a space in which woman can realize her identity and have her own author's voice» [11, p. 228]. Women were often stereotyped by a selective eye, «replacing women through various kinds of violence to the field of household, to the world of shopping, to the inner world of the sexual body». Nowadays, women's urban consciousness, the experience of women who care about their daily chores, becomes a crucial concern in urban planning. For the formation of gender competence in the field of urban planning, municipal employees are offered special training [12, p. 28-29]. The author of this study conducted gender training for municipal staff of the cities of Kharkiv and Chuguiv, where gender mapping was applied. The idea of gender mapping is to take the perspective of women and men. «Men's maps» try to convey a dynamic image, to show space that is absorbed by movement [13]. Thus, the use of transport mediates the relationship of time and space; as a result, distance within the territory starts to be understood in temporal terms. Imaginary routes can stretch or shrink depending on the convenience of car travel. Features of women's perception of urban space can also be explained by their specific social purpose, their involvement in reproductive labor (concern for others) [14, p. 33]. Women think about schools, hospitals, shops, recreation areas (parks). Thus, the physical and institutional landscape of the city becomes part of gender mapping. The town's decision makers should plan holistically to ensure the appropriate and accessible local provision of: -public services (post offices, schools, nurseries, hospitals, social services); -cultural and sports centers (cinemas, theaters, auditoriums, libraries); -recreational facilities (parks, after school clubs, youth centers). Also, modern urbanism revives the traditional view of the city from a short (street) distance.
The way the streetscape is designed and looked after can have an important impact on the lives of women, for example: -good lighting of streets and public places can help women feel and be more secure; -pavements should be clear of obstacles and wide enough for pushchairs, wheelchairs, etc. The town can help women balance their private and family lives with their professional life by planning services that facilitate their daily chores, for example by ensuring the provision of childcare facilities and nurseries [15, p. 60-61].

Family-friendly city. An effective tool for identifying the spatial pattern of gender-sensitive placement of objects is social mapping. Visibility and accessibility for the end user are the central advantages of this method. A gender map can serve as a tool for monitoring urban processes. Gender indicators for mapping can be: -the existence and development of infrastructure for the needs of the family: child care, as well as services such as healthcare and education, including kindergartens and playgrounds, and their location; -family leisure places reflecting the age and gender features of different population groups. Kyiv became the first city in Ukraine where gender mapping was used to mark family-friendly public spaces (Fig. 1) in three categories: «friendly to babies»: includes ramps for carriages and changing tables or rooms for child care; «kid-friendly»: includes children's rooms, chairs for children, children's menus, children's playgrounds (both outdoor and indoor, etc.); «family-friendly»: institutions that include everything needed for a stay with children of all ages. Gender marking through an interactive system is a form of «participatory design» [16], where each resident may be involved in the assessment and monitoring of urban space.

Gender aspects of children's rights and interests in urban infrastructure. In recent years, a network of «Child-Friendly Cities», initiated by UNICEF, has been created, stimulating municipal authorities, together with communities, to change the urban environment, making it easier and safer for children [17, p. 73]. These cities commit to fulfilling children's rights, including their right to: -walk safely in the streets on their own; -meet friends and play; -have green spaces for plants and animals; -live in an unpolluted environment. Ukraine has made the first steps along this way. The interests of children have been identified as a top priority of local government. 17 cities have joined the Child-Friendly Cities Initiative (CFCI) to date: Bilopillya, Chervonohrad, Drohobych, Horlivka, Kharkiv, Korosten, Krolevets, Lebedyn, Lviv, Odesa, Romny, Shostka, Simferopol, Sumy, Trostyanets, Vinnytsya, Yevpatoriya (Fig. 2). When children are playing, the street becomes their street, the square their square, the district their district; the city becomes their city and their domain. The city must create child-friendly places, for boys and for girls alike. Gender mainstreaming is not an end in itself but a means to achieve equality; this approach in urban planning is focused on the integration of gender equality. As the starting point for the gender analysis of urban planning in Ukraine, the «Program of landscape improvement of Kharkiv city» was chosen. (Kharkiv is the second-largest city of Ukraine, located in the north-east of the country. Its territory is 350 square kilometers; the population is 1,461,300. 70% of its residents live in about 10 thousand high-rise buildings, most of which were built during the Soviet era.)
According to the Ukrainian state building codes [18, § 8.6.1], children's playgrounds are installed for children up to 12 years, and sports and play complexes for teenagers. In general, children's playgrounds are gender-neutral (they are used equally by both boys and girls). The municipality installs in the yards of municipal property standard systems of five elements, suitable for children with different physical abilities on one play space. However, the set of playground elements used in Kharkiv (slide, playpen, swing, rocker and table tennis; Table 1) is, in our opinion, not optimal. Such playgrounds are not equally targeted at all age groups (Fig. 3). Ideally, a playground should benefit children of all ages equally. However, the development of children's playground projects often ignores the needs of the parents of preschool children (under Ukrainian law, children under 7 years must be accompanied by adults). Therefore, the space of the playground should be gender-sensitive in relation to the category «parents of children up to 7 years» (it must include benches for adult supervision of children). As for the 12+ age category, girls' need for space for physical activity in the yards of multi-storey buildings is satisfied significantly less than that of boys of their age. The sports grounds and playgrounds available in Kharkiv are focused to a greater extent on boys [19,20]. In recent years, the improvement of sports and gaming systems has been implemented within the framework of Ukraine's preparation for the finals of the European Football Championship «Euro 2012» and now «Eurobasket 2015». The existing (football) and planned (basketball) playgrounds in the courtyards of apartment buildings are oriented to a greater extent toward the active leisure of boys than of girls. Therefore, in the focus of urban planning, basic infrastructures for physical activity must take into account the need for preventive health equally for girls and boys. The policy of protecting the rights and interests of children in urban infrastructure also includes the organization of the environment. Concerning the availability of green spaces for children living in Ukrainian cities, the following infographic is indicative (Fig. 4: green spaces in cities of over one million in Ukraine). In the «green» rating of Ukrainian cities, Kharkiv and Odessa have low rates according to international standards, by which every 1,000 citizens should have at least 2.1 hectares of green space.

Gender mainstreaming in neighborhood planning. For gender-fair urban planning, it is proposed to consider the gender composition of apartment-building residents. According to the authors, gender indicators must be developed at a micro-territorial level, which allows greater depth in qualitative aspects. The scale of assessment of these indicators is the neighborhood, the space next to houses and the main stage where daily life unfolds. The following gender groups of beneficiaries have been determined in the use of the near-home area: -adults (moms, dads, grandparents, etc.)
with preschool children (under 6 years); -adults with children of primary school age (6-12 years); -children up to 3 years; -boys and girls of preschool age (3-6 years); -boys and girls of primary school age (6-12 years); -teenage boys and girls (13-17 years); -young people (boys and girls aged 18-35); -adults (men and women, 36+); -residents, female and male, of the «third age» (pensioners). As well, regardless of age, residents of apartment buildings can be distinguished in the following groups: -families (residents, female and male) who have their own vehicles; -families (residents, female and male) who have pets (dogs); -residents, female and male, of low mobility (people with disabilities, parents with children in strollers). Despite the fact that each social group has its own needs and expectations for the improvement of the near-home area, some criteria that are important for all can be highlighted: 1) security (lighting, house signs, no stray animals); 2) conditions for cultural, social and leisure sports activities (developed infrastructure of children's playgrounds, sports fields, benches and/or gazebos for board games, etc.); 3) ecology (equipped areas for the collection of solid waste, «green» areas, flower beds, insulation, space for dog walking, etc.). «Specific» indicators (specific to a particular group of beneficiaries) are presented in Table 2. The beneficiaries' evaluations can be arranged in a matrix

B = ‖b_ij‖, i = 1, ..., m; j = 1, ..., x,

where B is the matrix of project evaluations by beneficiaries, m is the number of beneficiary groups, and x is the number of indicators by which the beneficiaries assess the project [21]. Such an approach could be the basis for decision-making on the choice of options for public-service improvement projects (children's, sports and play complexes, park areas, etc.); a computational sketch is given after the conclusions. The inclusion of a gender audit reveals the specific requirements of the beneficiaries for the projected area and creates a platform for the multi-criteria selection of the most gender-sensitive projects (Fig. 5: integration of gender+ interests in neighborhood planning).

Conclusions. Summarizing the foregoing, the following conclusions are offered: 1. Ukrainian urban planning at the micro-territorial level (spaces next to houses, the main stage where daily life unfolds) does not take gender needs into account in spatial design. There is a methodological problem of incorporating gender parameters into urban projects (architectural, infrastructural, and design). 2. The gender mainstreaming approach in urban project management is focused on the integration of gender equality at all stages of the planning process: from formulating the objectives to planning the measures and to implementing and evaluating them (design, implementation, monitoring, and evaluation). 3. It is necessary to develop a gender-sensitive framework and to experiment with cases of urban planning that might have a transformative impact on the practices of gender+ beneficiaries, in particular gender criteria and instruments for green landscaping and neighborhood design projects. Moreover, it is important to incorporate family-friendly criteria and instruments in urban infrastructure to support high-quality cross-generational living. 4. The orientation toward what is needed for a good life for all city residents entails the necessity of integrating the gender perspective at every stage of the urban process.
The gender mainstreaming approach in urban planning must be cross-cutting: from formulating the objectives to planning the measures and to implementing and evaluating them (design, implementation, monitoring, and evaluation). Thus, the integration of «gender indicators of the quality of urban space» into the municipal politics of Ukrainian cities can transform the «urban landscape» toward a balanced combination of rights and opportunities for women and men in the «quality living spaces of the city». Also of note, the complexity of gendered evaluations of urban planning requires further elaboration of the knowledge base of urban project management.
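The beneficiary matrix B described above lends itself to a simple multi-criteria computation; the sketch below is purely illustrative. The group weights, indicator scores and the aggregation rule (a weighted mean over the m×x matrix) are assumptions for demonstration, not values from the article.

```python
import numpy as np

# Rows: beneficiary groups (m); columns: indicators (x),
# e.g. security, leisure infrastructure, ecology. Scores on a 1-5 scale.
groups = ["adults with preschool children", "teenagers", "pensioners"]
option_a = np.array([[4, 3, 5],    # matrix B for design option A
                     [2, 5, 3],
                     [5, 2, 4]])
option_b = np.array([[3, 4, 4],    # matrix B for design option B
                     [4, 4, 3],
                     [4, 3, 5]])
weights = np.array([0.4, 0.3, 0.3])  # assumed shares of the gender+ groups

def aggregate(B, w):
    """Weighted mean score: average the indicators per group, then weight the groups."""
    return float(w @ B.mean(axis=1))

for name, B in [("A", option_a), ("B", option_b)]:
    print(f"option {name}: {aggregate(B, weights):.2f}")
```

The option with the higher aggregate score (or a group-by-group Pareto comparison) would then feed the gender audit and the multi-criteria selection of Fig. 5.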
Intercellular adhesion molecule 1 is a sensitive and diagnostically useful immunohistochemical marker of papillary thyroid cancer (PTC) and of PTC-like nuclear alterations in Hashimoto's thyroiditis Intercellular adhesion molecule 1 (ICAM-1) is important in the progression of inflammatory responses. Recently, increased levels of ICAM-1 have been reported in a number of types of malignancy. The present study aimed to investigate ICAM-1 expression in papillary thyroid cancer (PTC) and in Hashimoto's thyroiditis (HT) with PTC-like nuclear alterations, and to assess the predictive value of ICAM-1 in thyroid lesions. ICAM-1 expression was retrospectively investigated in 132 consecutive cases of PTC, 72 cases of HT, 10 of follicular cancer, 15 of follicular adenoma, 16 of nodular goiter and 8 samples of normal thyroid tissue using immunohistochemical analyses, and in 42 PTC patients using western blotting. ICAM-1 expression was not detected in normal follicular cells, follicular lesions (adenoma and cancer) and benign nodular hyperplasia, but was frequently overexpressed in PTC cells. ICAM-1 overexpression was associated with extra-thyroidal invasion and lymph node metastasis; no association was found with age, gender, tumor size, multifocality, pathological stage, recurrence or distant metastasis. ICAM-1 expression in HT patients with PTC-like nuclear alterations was significantly higher than that in HT cases with non-PTC-like features. Compared with antibodies against cytokeratin 19, galectin-3 and Hector Battifora mesothelial-1, ICAM-1 was the most sensitive marker for the detection of PTC-like features in HT. These findings demonstrate that ICAM-1 expression is upregulated in PTC and in HT with PTC-like nuclear alterations. This feature may be an important factor in the progression of cancer of the thyroid gland. Introduction Papillary thyroid cancer (PTC) is the most prevalent type of malignant tumor of the endocrine system and accounts for 70-80% of all diagnosed cancers of the thyroid gland (1). Histopathological diagnosis of PTC is effective in the majority of cases. However, the diagnosis of rare variants of PTC and of Hashimoto's thyroiditis (HT) with PTC-like nuclear alterations is challenging. Intercellular adhesion molecule 1 (ICAM-1) is a transmembrane glycoprotein receptor and a member of the immunoglobulin superfamily of adhesion molecules. It is expressed on the surface of various cell types, including endothelial cells, leukocytes (with the exception of basophilic granulocytes), T cells, B cells and fibroblasts. ICAM-1 is responsible for the arrest and transmigration of leukocytes out of blood vessels and into tissue, as well as the formation of immunological synapses during T cell activation (2). Recently, a number of studies have reported that ICAM-1 is present in several types of cancer, including prostate, breast and oral cancers (3)(4)(5), and is involved (at least in part) in their progression. Few reports have found ICAM-1 expression to be elevated in PTC (6,7), and the prognosis and clinical significance of ICAM-1 remain unclear, particularly in certain histological types of thyroid lesions (e.g., HT with PTC-like alterations). The present study sought to validate ICAM-1 as a sensitive immunohistochemical (IHC) marker to distinguish PTC from different diseases of the thyroid gland, and to estimate the predictive value of ICAM-1 by studying the aggressive behavior of PTC. Materials and methods Ethical statement. 
The present study was approved by the Research Ethics Committee of Shandong Provincial Hospital Affiliated to Shandong University (Jinan, China) and written informed consent was obtained from all patients. Clinical data. The study cohort comprised 245 consecutive patients (171 women and 74 men; age range, 28-75 years; mean age, 42 years). Of these, 132 had primary PTC, 10 follicular cancer, 15 follicular adenoma, 16 nodular goiter and 72 HT; 8 samples of normal thyroid tissue were also examined. IHC procedure. Sections (thickness, 4 µm) were cut from formalin-fixed, paraffin-embedded blocks, and subsequently deparaffinized in xylene and rehydrated using a series of graded washes with ethanol. After inhibition of endogenous peroxidase and antigen retrieval (microwave irradiation in 0.01 M citrate buffer at pH 6.0), sections were incubated with each primary antibody at 4˚C overnight, followed by incubation with horseradish peroxidase (HRP)-conjugated secondary antibodies (dilution, 1:100; Dako) for 1 h at 4˚C. Slides were developed for 5 min with the chromogen 3,3'-diaminobenzidine, counterstained with hematoxylin to distinguish the nucleus from the cytoplasm, and evaluated under a microscope (BX51; Olympus Corporation, Tokyo, Japan). Normal tonsil tissues (which are known to express ICAM-1) were used as positive and negative controls after being stained with or without primary antibodies, respectively. All assessments were undertaken in three separate experiments, and one representative assessment of the three experiments is shown. Expression of galectin-3 in tumor cells was defined as 'positive' if the cytoplasm was stained; occasional staining of the nucleus was also observed (10). Expression of CK-19 and HBME-1 was evaluated by staining of the cell membrane, and that of TPO by staining of the cytoplasm. The intensity of staining for each antibody was scored as follows: negative, 0; weak, 1+; moderate, 2+; strong, 3+. Positive immunostaining was defined as a staining intensity of 2+ to 3+ in >10% of cells. There were no differences in opinion between the two pathologists. All assessments were conducted in three separate experiments, and one representative assessment of the three experiments is shown. Statistical analyses. Data were analyzed with SPSS software version 16.0 (SPSS, Inc., Chicago, IL, USA). The χ² test was used to calculate the statistical significance of the variables. P<0.05 was considered to indicate a statistically significant difference. Results. ICAM-1 expression in different diseases of the thyroid gland. Expression of ICAM-1 in 245 thyroid samples from patients who underwent surgery of the thyroid gland was investigated by IHC analyses. This revealed that 85.6% (113/132) of PTC samples and 18.1% (13/72) of HT samples were positive for ICAM-1, whereas all samples of follicular cancer (n=10), follicular adenoma (n=15), nodular goiter (n=16) and normal thyroid (n=8) were negative (Table I). As shown in Fig. 1, ICAM-1 expression was observed in the cell membrane and cytoplasm. Notably, in well-differentiated PTC tissues, ICAM-1 expression was detected at the apical surface of the papillary region and the thyroid gland (Fig. 1B-C). Expression of ICAM-1 was increased significantly in the PTC group compared with the other groups (P<0.001), and was also markedly higher in the HT group than in the other groups, with the exception of the PTC group (P<0.001). ICAM-1 expression in PTC assessed using western blotting.
In 42 PTC samples and paired non-tumorous thyroid tissues obtained from the same patients, ICAM-1 overexpression was found in 36 PTC samples in comparison with the non-tumorous tissues (P<0.001) (Fig. 2). Western blotting results for ICAM-1 corroborated the immunostaining data (34 of the corresponding 42 samples demonstrated positive immunostaining), and the agreement with the IHC results was 94.4% (data not shown). Association between ICAM-1 expression and clinicopathologic features of PTC. The associations between ICAM-1 expression and the clinicopathological features of patients are shown in Table II. ICAM-1 expression was not correlated with age, gender, tumor size, multifocality, pathological stage, recurrence or distant metastasis (P>0.05). However, ICAM-1 expression was associated with extra-thyroidal invasion (P=0.015) and lymph node metastasis (LNM; P=0.027). Of the 132 PTC patients, 47 had metastasis in the cervical lymph nodes, 35 had local recurrence, and 17 had distant metastasis (8 with lung metastasis, 4 with brain metastasis and 5 with bone metastasis). High expression of ICAM-1 was detected in the tumor cells of the 4 patients who died of distant metastasis. ICAM-1 expression in HT patients. Of the 72 HT cases, 22 exhibited ICAM-1 expression. Contralateral thyroid cancer was diagnosed in 5 of these cases at 8-63 months after surgery, and LNM was detected in 2 of these 5 patients. Of the 50 ICAM-1-negative patients, only 4 had contralateral thyroid cancer, and LNM was detected in 1 patient. Among the HT cases, the HT-to-PTC progression rate of ICAM-1-positive patients was significantly higher than that of ICAM-1-negative patients (22.7 vs. 8%). Of the 72 HT cases, 21 exhibited PTC-like features, 13 of which were positive for ICAM-1. Only 9 of the 51 patients with non-PTC-like HT expressed ICAM-1 (Table III). Thus, ICAM-1 expression in patients with PTC-like HT was significantly higher than in cases with non-PTC-like HT (P<0.001). To evaluate the diagnostic value of ICAM-1 expression for the detection of PTC-like alterations, antibodies against CK-19, galectin-3, HBME-1 and TPO were employed. PTC-like nuclear changes in HT, confirmed on the hematoxylin and eosin slides, served as the 'golden criteria'. The specificity and sensitivity of the different antibodies and antibody combinations were calculated (Fig. 3). Galectin-3 was the most specific (71.4%) single antibody, whilst HBME-1 had the lowest specificity (38.1%). ICAM-1 was the most sensitive marker for the diagnosis of PTC-like features (82.4%). With respect to antibody combinations, co-expression of any two or three antibodies increased the specificity of the diagnosis of PTC-like features to >70% (range, 71.4-85.7%), with three panels being 85.7% specific (CK-19/galectin-3; ICAM-1/CK-19/galectin-3; and CK-19/galectin-3/HBME-1); however, the sensitivity of the three antibody combinations was significantly lower than that of a single antibody (Table IV). Discussion. PTC is the most prevalent manifestation of cancer of the thyroid gland, representing 70-80% of all cancers of the thyroid gland (11). Examination by B-ultrasound has become the first choice for auxiliary examination of thyroid nodular disease, whilst pathological examination remains the 'gold standard' for the diagnosis of cancer (12). However, differentiating PTC from benign papillary hyperplasia of the thyroid gland based on its morphology (particularly if it exhibits PTC-like nuclear alterations) is challenging.
Hence, identifying sensitive and specific IHC markers to differentiate between benign thyroid nodular disease and PTC is urgently required. CK-19, HBME-1 and galectin-3 have been demonstrated to show higher expression in PTC than in benign follicular lesions of the thyroid gland (13); however, results among studies have varied (14,15). The present study assessed ICAM-1 expression, evaluated its diagnostic importance in PTC, and examined the potential value of measuring ICAM-1 expression in HT with PTC-like features. In healthy individuals, ICAM-1 is expressed at low levels on various cell types, including endothelial cells, fibroblasts, and certain types of leukocyte (2). Recently, it has also been reported to play an important role in promoting progression in various types of cancer. Hayes and Seigel (2) observed ICAM-1 expression in 300 tissue cores from multiple arrays of normal, malignant and metastatic tissues by IHC analyses. They observed ICAM-1 expression to be associated with various cancer types, and it appeared to play a part in cancer metastasis (2). Several studies have demonstrated upregulation of ICAM-1 expression in PTC (16,17). Buitrago et al (7) identified expression of the ICAM-1 gene to be higher in PTC and LNM when compared with benign tumors. In accordance with those results, 113 of 132 PTC samples exhibited overexpression of ICAM-1 in the present study, whereas no cases of follicular cancer, follicular adenoma, nodular goiter or normal thyroid tissue were immunoreactive. A constant diagnostic challenge occurs when differentiating the follicular variant of PTC from follicular lesions (adenoma and cancer). In the present study, 10 of 16 follicular-variant PTC cases exhibited moderate to high expression of ICAM-1, supporting the notion of ICAM-1 as a specific marker for differentiating between follicular-type lesions in thyroid tissues. Furthermore, ICAM-1 expression was demonstrated to be associated with certain clinicopathological characteristics of patients. The majority of PTC cases expressing ICAM-1 tended to exhibit extra-thyroidal invasion and LNM, suggesting an aggressive growth pattern. ICAM-1 has been demonstrated to facilitate the spread of metastatic cancer cells via the recruitment of inflammatory cells, by stimulating their proliferation, angiogenesis and invasion (18). However, the underlying mechanism of this action remains unclear. The association between HT and PTC is controversial. The prevalence of cancer in HT has been reported to range from <1 to 32% (19-21). Jankovic et al (19) undertook a systematic review of original studies that investigated the correlation between HT and PTC. Notably, studies based on fine-needle aspiration biopsy reported no link between HT and PTC, whereas many of the studies using thyroidectomy specimens revealed a positive association. Several authors have postulated that the inflammatory response may cause DNA mutations that eventually lead to the development of PTC (22,23). Certain studies have observed a higher risk of PTC in patients with HT, particularly those who harbor focal PTC-like nuclear alterations in thyroid epithelial cells (e.g., nuclear overlapping, enlargement, chromatin clearing, intranuclear grooves and inclusions), which may be observed in almost one-third of HT cases on routine microscopic examination (24-26).
A number of studies have reported that focal PTC-like changes suggest the possibility of focal, early premalignant transformation in some cases of HT, which eventually leads specifically to PTC (27,28). It is likely that there is a morphological continuum between PTC-like thyrocytes, follicular hyperplasia and metaplasia of Hürthle cells, and the reliability of valuable IHC markers is uncertain (29). In this work, 21 of 72 HT cases had features of PTC-like nuclear changes, most of which exhibited clusters (Table III; see also Table II, 'Correlation between clinicopathological features and ICAM-1 expression in papillary thyroid cancer patients', n=132). One previous study (24) noted focal expression of galectin-3 (87%), CK-19 (65%) and HBME-1 (26%) primarily in nodules in HT cases, thereby demonstrating unique morphological features that overlap with PTC, in accordance with the current results. The utility of CK-19 expression has been studied extensively and it has been hypothesized to be the most sensitive marker for the diagnosis of PTC, and galectin-3 is also believed to be a valuable marker for distinguishing PTC from benign conditions (13). However, CK-19 was observed in ~14% of cases of follicular adenoma; hence the reliability of these antibodies in routine practice (particularly if dealing with questionable PTC features) must be addressed fully. The present study demonstrated ICAM-1 to be a sensitive IHC marker of PTC. Furthermore, positive staining with ICAM-1, if strong and diffuse, may be useful for distinguishing PTC-like alterations in HT from histological mimics. All 5 HT cases diagnosed with contralateral thyroid cancer following surgery were strongly positive for ICAM-1 expression, suggesting that ICAM-1 is a potential marker for predicting the possibility of progression to PTC in HT with PTC-like features. It was previously reported that expression of the RET/PTC-1 and RET/PTC-3 oncogenes in patients with HT may identify HT as a pre-neoplastic lesion (25,26). Reports have also stated that the p63 protein (30,31) and the loss of heterozygosity of 8-oxoguanine DNA glycosylase may be involved in neoplastic transformation from HT to PTC (32). To date, however, no affirmative genetic linkage has been confirmed. We suggest that ICAM-1 be used as part of a panel that may include CK-19 and galectin-3, depending on the differential diagnosis that is considered.
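To make the single-marker versus panel trade-off concrete, the sketch below computes the sensitivity and specificity of a co-expression ('AND') panel against the H&E golden criteria. The data are synthetic, generated only to roughly mimic the reported single-marker rates; none of these numbers come from the study, and the `panel_performance` helper is hypothetical.

```python
import numpy as np

def panel_performance(truth, *markers):
    """Sensitivity/specificity of requiring co-expression (logical AND)
    of all markers, against the H&E 'golden criteria' (truth).
    truth and each marker are boolean arrays, one entry per HT case."""
    pred = np.logical_and.reduce(markers)
    tp = np.sum(pred & truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    return tp / (tp + fn), tn / (tn + fp)

# synthetic cohort of 72 HT cases, 21 with PTC-like features
rng = np.random.default_rng(0)
truth = np.arange(72) < 21
icam1 = rng.random(72) < np.where(truth, 0.82, 0.18)  # sensitive, less specific
gal3  = rng.random(72) < np.where(truth, 0.60, 0.29)  # more specific
sens, spec = panel_performance(truth, icam1, gal3)
print(f"ICAM-1 AND galectin-3: sensitivity {sens:.2f}, specificity {spec:.2f}")
```

Requiring co-expression can only remove positives, which is why specificity rises while sensitivity falls relative to any single marker, the pattern reported in Table IV.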
Predictors of Occupational Burnout: A Systematic Review

We aimed to review occupational burnout predictors, considering their type, effect size and role (protective versus harmful), and the overall evidence of their importance. MEDLINE, PsycINFO, and Embase were searched from January 1990 to August 2018 for longitudinal studies examining any predictor of occupational burnout among workers. We arranged predictors in four families and 13 subfamilies of homogeneous constructs. Plots of z-scores per predictor type enabled graphical discrimination of the effects. Vote-counting and the binomial test enabled discrimination of the effect direction. The size of the effect was estimated using Cohen's formula. The risk of bias and the overall evidence were assessed using the MEVORECH and GRADE methods, respectively. Eighty-five studies examining 261 predictors were included. We found a moderate quality of evidence for the harmful effects of the job demands subfamily (six predictors) and of negative job attitudes, with effect sizes from small to medium. We also found a moderate quality of evidence for the protective effect of adaptive coping (small effect sizes) and leisure (small to medium effect sizes). Preventive interventions for occupational burnout might benefit from targeting the established predictors: reducing job demands and negative job attitudes, and promoting adaptive coping and leisure.

Introduction The etiology of occupational burnout remains unclear, although it has elicited considerable interest in occupational health sciences over the last few decades [1-4]. Occupational burnout can have adverse consequences not only at an individual level (e.g., physical and mental health problems) [5] but also at an organizational level (e.g., absenteeism, poor performance at work, misjudgments and errors, job turnover) [6]. From both an individual and an organizational perspective, the prevention of occupational burnout has been viewed as the best approach to dealing with this phenomenon [7]. Due to a lack of consensus on how occupational burnout should be defined and assessed, identifying the determinants of the syndrome has been challenging [8,9]. The European Network on the Coordination and Harmonization of European Occupational Cohorts (OMEGA-NET) recently proposed a harmonized definition of occupational burnout accepted by a majority of 50 experts from 29 countries [10], together with a systematic assessment of the psychometric quality of five occupational burnout measures [11]. Such work has helped to resolve semantic and methodological issues in assessing occupational burnout, particularly by focusing on exhaustion measurement. Nevertheless, the etiology of burnout still needs to be clarified by considering all predictors studied in longitudinal prospective studies. Prior systematic reviews of predictors of occupational burnout [12-22] had some restrictions: they focused on a specific occupational group (physicians, nurses, mental health professionals) [12,15,20], studied only job-related predictors [23,24], or selected studies with a particular duration of follow-up between two measurement points in longitudinal studies [13]. The duration of follow-up between two measurement points is particularly critical because the latency of occupational burnout onset remains uncertain [10,25-27]. Concerning the predictors of occupational burnout, several models have been commonly used in the literature.
Among the most prominent of these models are the Job Demand-Control (JD-C) model [28], the Demand-Control-Support (DCS) model [29], the Job Demands-Resources (JD-R) model [30], and the Effort-Reward Imbalance (ERI) model [31]. Given the diversity of these models and the uncertainty surrounding the predictors of occupational burnout, a systematic assessment including all longitudinally studied predictors, regardless of the underlying models, appeared essential, particularly for distinguishing between different types of predictors and assessing their respective effects. A reassessment of occupational burnout predictors is urgent for at least two main reasons: first, to resolve the between-study inconsistencies and conclude whether a given predictor has a protective or harmful effect on occupational burnout occurrence [32]; second, to establish the level of evidence through a systematic analysis of all available findings, on all potential predictors, and in all occupations. Following the OMEGA-NET harmonized definition of occupational burnout as a state of physical and emotional exhaustion [10], we considered a quantitative synthesis of occupational burnout predictors focused on exhaustion to be the best approach. Additionally, exhaustion is the only characteristic of burnout that is recognized in all of its conceptualizations and operationalizations [33-35]. It is also the only characteristic of burnout that is associated with decreases in objective job performance [36]. In such a context, unsurprisingly, many investigators have chosen to focus only on exhaustion when investigating burnout [12,37-40]. Aims of the Current Study This study aimed to review occupational burnout predictors, considering their type, effect size, and role (protective versus harmful), and the overall evidence of their importance. Materials and Methods We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist [41] and the Synthesis Without Meta-analysis (SWiM) guidelines [42] for reporting this study. Protocol and Registration The protocol of this study is available on the international database PROSPERO under the registration number CRD42018105901 at: https://www.crd.york.ac.uk/PROSPERO/display_record.php?ID=CRD42018105901&ID=CRD42018105901 (accessed on 17 August 2018). Inclusion and Exclusion Criteria We performed systematic searches for studies examining the predictors of occupational burnout. We included original research studies that examined the effect of any predictor of occupational burnout measured as exhaustion, whatever the instrument used. The included studies were written in any European language, had a longitudinal design enabling exposure assessment before the burnout assessment, and were conducted among active workers (minimum 50 workers per group). The reasons for exclusion were: 1-no full text could be found; 2-studies that only reported an overall burnout score and/or measures other than exhaustion; 3-studies where participants were not professionally employed (e.g., students); 4-studies where no measure of the variability of the study's parameters and outcomes was reported (e.g., p-value, confidence intervals or the standard error of the mean). Data Sources and Search Terms The literature search was conducted over the period from 1 January 1980 to 8 August 2018 in three databases: MEDLINE, PsycINFO, and Embase via Ovid. We implemented the search strategy with the help of an experienced librarian; the full strategy can be found in Figure S1.
We validated this literature search by confirming sufficient coverage of the studies included in the latest systematic review on burnout in working populations available at the time of conducting the search [13]. In addition, we checked the reference lists from articles and reviews retrieved in our electronic search for any additional studies to include. In cases where we identified multiple publications describing a single study, we included the study only once, choosing one of the publications as the primary reference (the most complete one that included the latest follow-up) under which we listed all the others. We did not search the gray literature in order to avoid systemic bias and to guarantee the reproducibility and openness of our search and study selection strategy. Study Selection We used the bibliography software EndNote X8 to import the collected studies. Two independent reviewers then screened the imported references. The reviewers removed remaining duplicates within and between databases before they started the screening process. They used the above-mentioned inclusion and exclusion criteria to retain or reject articles and documented their decisions in a standardized form designed specifically for this study. The reference screening was performed in two steps: title and abstract screening, and full-text screening. In both steps, the references were equally distributed between 14 reviewers, while a second independent reviewer examined all of them independently. All discrepancies between the two reviewers' assessments were discussed and solved by consensus, consulting a third reviewer when required. Data Extraction and Management We specially designed a standardized data extraction form in MS Excel, which we validated with a random sample of ten included studies. Five reviewers extracted the data independently, compared their data, discussed the discrepancies and flaws, and improved the form until reaching an unambiguous, valid format. The reviewers used this form for extracting data from the studies assigned to them. The following data were extracted: study details (date of study, title, authors, and research question); methods (study design, primary outcome, predictor variables, exposures, potential confounders, and any other outcomes); participant population demographics (age, sex, socioeconomic background, and co-morbidities), inclusion and exclusion criteria, and participation rate; outcomes (name and definition, how they were measured and reported); and statistics (beta coefficients from linear regressions, their standard errors (ideally), p-values or confidence intervals (CI), missing data, and reasons for missing data). All extracted data were cross-checked by a second reviewer. Data Synthesis First, we sorted and grouped all predictors into families corresponding to similar constructs or using similar measures. This enabled us to synthesize the abundant amount of information and make each family of predictors as homogeneous as possible. For example, based on a review on job burnout [43], we considered two main families of predictors: situational and individual. Job characteristics and organizational characteristics were included in the former, whereas personality characteristics and work attitudes were included in the latter. Non-occupational factors were grouped based on the type of predictor.
Moreover, at the intersection between work and personal life, we considered a third family of predictors, the work-life interface [44,45], which refers to factors of personal life that overlap with work factors or vice versa. Finally, we classified other variables, either considered as predictors of occupational burnout not included in the other three main families or as intermediate outcomes or consequences of some working conditions (such as stress or satisfaction), in a fourth main family named "Perceived intermediate work consequences". Secondly, we categorized predictors within each family into subfamilies such that all predictors of one subfamily met the following conditions: 1-they related to the same or a similar construct; 2-they had the same theoretical valence/direction (e.g., two subfamilies "maladaptive coping style" and "adaptive coping style" instead of one subfamily "coping style"). Statistical Analysis In this analysis, we only considered the direct path showing the effect of each predictor on the outcome. We also considered only the unadjusted effects whenever possible. By dividing the effect estimate (beta coefficient) by its standard error, we calculated the z-score for each study and each predictor. If the uncertainty parameter associated with the beta estimate was a p-value or confidence interval, we applied a formula (Figure 1) to convert it into a standard error. We plotted the z-scores per predictor type, which enabled graphical discrimination of those associated with significantly increasing or decreasing occupational burnout rates. We further implemented the vote-counting method to identify the predominant direction of effect within a group of predictors [46]. In this analysis, the number of studies showing harmful effects was compared with the number of studies showing protective effects, regardless of statistical significance [47]. The statistical significance of the predominant effect was then tested using the binomial test [46]. This method enabled us to test whether the subfamily effect was harmful (or protective) in more than 50% of studies. Finally, we computed effect sizes by extracting the correlation coefficients (for each exposure at time 1 correlating with the outcome at time 2), and then we used the formula suggested by Cohen [48]. Effect sizes of 0.02, 0.15, and 0.35 can be considered "small", "medium", and "large", respectively. We used R 3.6.2 statistical software (R Foundation for Statistical Computing, Vienna, Austria) for generating z-plots and STATA version 16.1 (StataCorp LP, College Station, TX, USA) for all other analyses. Risk of Bias Assessment We assessed the risk of bias of each study included in the synthesis using the Methodological Evaluation of Observational Research Checklist (MEVORECH) [49]. This checklist provides separate examinations of external and internal validity, with the labeling of major and minor flaws or poorly reported data on the study methodology. We performed the assessment using an MS Excel standardized form to report all elements of the MEVORECH, which we further analyzed using STATA. This allowed us to calculate an overall risk of bias score for each study and classify the studies into three categories, as follows: high risk of bias (score > 43); moderate risk of bias (scores between 36 and 43); and low risk of bias (score < 36).
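A minimal sketch of the computations just described is given below; this is not the authors' code, and the effect-size formula f2 = r^2/(1 - r^2) is our assumption of the "formula suggested by Cohen" (it matches the quoted 0.02/0.15/0.35 benchmarks for Cohen's f2).

```python
from scipy import stats

def z_score(beta, se):
    """z-score for one predictor in one study: beta divided by its standard error."""
    return beta / se

def se_from_ci(lower, upper, level=0.95):
    """Recover a standard error from a symmetric confidence interval."""
    z_crit = stats.norm.ppf(0.5 + level / 2.0)   # 1.96 for a 95% CI
    return (upper - lower) / (2.0 * z_crit)

def se_from_p(beta, p_two_sided):
    """Recover a standard error from a two-sided p-value."""
    return abs(beta) / stats.norm.isf(p_two_sided / 2.0)

def cohens_f2(r):
    """Assumed effect size from a T1-exposure/T2-outcome correlation: f2 = r^2 / (1 - r^2)."""
    return r**2 / (1.0 - r**2)

def classify_risk_of_bias(score):
    """MEVORECH overall score -> category, using the thresholds quoted in the text."""
    if score > 43:
        return "high"
    return "moderate" if score >= 36 else "low"

# Direction-based vote counting within a subfamily, tested with a binomial test:
betas = [0.31, 0.12, -0.05, 0.22, 0.40, 0.09, -0.01, 0.18, 0.27, 0.15]  # toy values
n_harmful = sum(b > 0 for b in betas)
test = stats.binomtest(n_harmful, n=len(betas), p=0.5, alternative="greater")
print(f"{n_harmful}/{len(betas)} harmful; binomial p = {test.pvalue:.3f}")
```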
This step is necessary to evaluate the overall risk of bias in studies of the same predictor or (sub)family of predictors when assessing the overall quality of evidence. Quality of Evidence Assessment We assessed the overall quality of evidence using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach [50]. The GRADE consists of five domains: risk of bias; inconsistency; indirectness; imprecision; and publication bias. The reviewers started with the assumption that the quality of evidence from the studies on a predictor or (sub)family of predictors was high, and then they downgraded the evidence in cases of high risk of bias, inconsistency, indirectness, imprecision, or publication bias. The resulting overall level of evidence per predictor or (sub)family of predictors was labeled as high, moderate, low, or very low based on the total GRADE score. Results Figure 2 summarizes the study selection process. From 5297 identified references, 2935 were screened based on the title and abstract after duplicates, conference abstracts, and articles without abstracts had been removed. The rate of disagreement between reviewers regarding the eligibility of abstracts was less than 20%, and once solved, 443 references were retained for the full-text screening. In this step, the rate of disagreement regarding the eligibility of studies was less than 9%, and once solved, 85 articles were finally included in the review (Figure 2 and Table S1). Description of the Included Studies The included studies were conducted between 1993 and 2018 (Table S1), mainly in European countries (Europe 71%, North America 23%, and Asia 6%). Teachers (15%), healthcare and social workers (13%), nurses (11%), physicians (6%), and police officers (5%) were the most studied occupations, though 9% of studies were based on mixed samples of different occupations. Regarding the time lags used (the time between the measurement points, or so-called waves, in a longitudinal study), 31% of the 85 studies used time lags of less than one year, 44% used a one-year time lag, and only 25% used time lags of more than one year. Regarding the hypotheses tested, 17 included studies tested the strain hypothesis for the JD-C, JD-R, and JDCS models. Four studies showed that their results were consistent with the JDCS model [12,[51][52][53], whereas the results of two studies were in partial consistency (at least one dimension of the JDCS scale predicted exhaustion) [54,55]. Additionally, results from one study were not consistent with the JDCS strain hypothesis [56]. For the JD-C, we found four studies, with consistent [57], partially consistent [58], and inconsistent results [59,60]. Among studies testing the JD-R strain hypothesis, four were in line with it [61][62][63][64], while three others were against it [60,65,66]. We also found six studies which examined the buffer hypothesis, five of which were negative. These studies concluded that high job control or high job resources do not alleviate the harmful effect of high job demands [53,56,60,65,66]. Only the results from the study of Feuerhahn et al. were in line with the buffer effect hypothesis [51]. Regarding the ERI model, the results from two studies were in line with this model [54,67]. Predictor (Sub)Families and Associated Z-Scores In this review, we identified 261 predictors, which we grouped into four families and 13 subfamilies. Figure 3 depicts the content of each family of predictors, while Table S2 provides the definitions of predictors within each family and/or subfamily and their theoretical background. For each family and subfamily of predictors, we plotted z-scores calculated from studies investigating these predictors. In Figure 4, ten plots corresponding to 10 studies investigating at least one of the predictors belonging to the Job demands, Cognitive demands, and Physical demands subfamilies are presented together to facilitate an overall view of the z-score distribution in this family of predictors. Z-score values higher than zero correspond to a positive association between the predictor and exhaustion, which is labeled as a harmful effect. Conversely, z-score values less than zero correspond to a negative association between the predictor and exhaustion, which is labeled as a protective effect. If the z-score of the predictor lies outside the 95% CI bounds (i.e., 1.96, −1.96; indicated by the dotted lines in Figure 4), then the effect is statistically significant. At zero, there is no association between the predictor and the outcome (exhaustion). Figure 4 thus shows that within the Job demands subfamily, three studies [62,68,69] out of ten found a significantly harmful effect of high job demands overall with respect to exhaustion increase, and three studies [12,54,70] found this effect at borderline statistical significance. Gelsema et al. [52] found that physical job demand was harmful, while Korunka et al. found that cognitive job demand was protective against exhaustion [71]; two other studies were inconclusive [61,63]. The complete set of plots for all (sub)families of predictors is available in Table S3. Job Demand, Decision Latitude (Job Control), and Job Resources We found a moderate quality of evidence for harmful effects of small to moderate sizes for high Job demands (overall) based on six studies (Table 1). The quality of evidence for the harmful effects of high Quantitative demands (examined in 24 studies) and the protective effects of Job resources (19 studies) was low, with effects ranging between small and large sizes. The quality of evidence for the harmful effects of high Emotional demands (11 studies) was very low, with the effect ranging between small and large sizes and considerable variation across studies. For the Decision latitude (Job control) subfamily, we did not find any statistically significant effects (Table 1).
Interactions at Work, Communication, and Leadership As shown in Table 1, the quality of evidence for the protective effects of high social support (21 studies) and the harmful effects of high conflicting/poor communication (five studies) was very low, with effect sizes ranging from small to medium, though the majority of studies showed small sizes. We also found a very low quality of evidence for high social hindrance (11 studies), with harmful effects of sizes ranging from small to large, but the majority of studies showed small sizes. For the leadership subfamily, we did not find any statistically significant effects. Personality Traits, Coping, Self-Evaluation, Job Attitudes, and Personal Events The personality traits and self-evaluation subfamilies did not show any significant effects (Table 1). However, we found a moderate quality of evidence for the protective effects of high adaptive coping (six studies), with small effect sizes; for the protective effects of high leisure, such as relaxation, social activity, and physical exercise (five studies), with sizes ranging from small to medium; and for the harmful effects of high negative job attitude (nine studies), with sizes ranging from small to medium. The quality of evidence was very low for the protective effects of high positive job attitude (eight studies), with small sizes, and for the protective effects of high self-esteem, with sizes ranging between small and large. Work-Family Interface and Perceived Intermediate Work Consequences In the Work-family interface family, there is only low quality of evidence for the harmful effect of work-family conflict (13 studies), with sizes ranging from small to medium. We found a low quality of evidence for the harmful effects of high stress from work conditions (ten studies), with sizes ranging from small to large. Results per Individual Predictor Focusing on individual predictors (before grouping them into subfamilies), we found that only six out of 261 predictors had a statistically significant effect of large size (Cohen's f2 ranging between 0.39 and 0.69) on occupational burnout rate (Table S1). Three of them had a low risk of bias, including effort-reward imbalance and work and time demands (having a harmful effect) and core self-evaluation (having a protective effect). The other three predictors were of a moderate risk of bias, with workload and class disruption having a harmful effect and increased emotional competencies having a protective effect. Discussion Main Findings Performing this systematic review of 85 studies and 261 predictors led us to conclude that the evidence for any previously established risk or protective factor does not reach a high level. We found a moderate quality of evidence for only four subfamilies of predictors, namely the harmful effects of job demands (overall) and negative job attitudes, as well as the protective effects of adaptive coping and leisure. Low quality of evidence was found for the harmful effects of quantitative demands, work-family conflict, and stress from work conditions. The grouping of the predictors was performed depending on the theory or framework behind the predictors. However, for some predictors, namely "Satisfaction" and "Stress" from work conditions, we encountered some disagreements.
Some authors considered them as situational predictors (related to work conditions), while for others they represented a consequence of work conditions and therefore an intermediate (mediating) effect on the pathway between the exposures and occupational burnout. Nevertheless, it is noteworthy that these predictors were measured using different instruments than the ones applied for predictors in the "Situational factors" family. Accordingly, we decided to group them as an independent family entitled "Perceived intermediate work consequences". The Job Demand-Control model (JD-C model) is among the most studied models for occupational burnout [72], and our results indicated a moderate quality of evidence for job demands as a harmful effect of large size. Otto et al. [73] suggested increasing the job control of employees and reducing job demands. Nevertheless, Konze et al. raised the question of whether job control could be a double-edged sword [58], and by taking a closer look at skill discretion and autonomy, we observed that for these two predictors the direction of effects varied across studies, with small effect sizes, no significant results, and a very low quality of evidence. Apparently, these predictors require further investigation with representative samples and multiple-wave studies to assess their effects on occupational burnout. Increasing job resources can serve as a protective factor [73], as shown by this review. Social support had a protective effect, which is also supported by the work-related stress literature [74]. Social hindrance had a harmful effect, in line with the finding of Schilpzand et al., which suggested that hindrance affects employees' well-being [75]. There is an assumption that communication (i.e., the quality and effectiveness of communications between workers) can be an important predictor of occupational burnout [76], specifically communication climate and communication satisfaction, and the results of this review showed that conflicting/poor communication has an important harmful effect on occupational burnout. Coping strategies and self-efficacy could prevent occupational burnout onset, as previous systematic reviews [77,78] also supported. However, we found that adaptive coping in particular is protective against occupational burnout. Alarcon et al. (2011) performed a meta-analysis studying the association between job attitudes and burnout [23] and showed that adaptive organizational attitudes (such as organizational commitment) were associated with occupational burnout, which is consistent with our results, although their review included cross-sectional studies. A systematic review suggested that physical activity could reduce occupational burnout [79], which is supported by our results. However, we found a moderate quality of evidence for the whole leisure subfamily (including physical activity). Among predictors belonging to the work-family interface subfamily, occupational burnout was found to be associated only with high work-family conflict, the most studied predictor in this subfamily. A meta-analysis by Amstad et al. concluded that work interference with family and family interference with work are both related to occupational burnout [80]. While our conclusion is based on longitudinal studies exclusively, Amstad et al. also considered cross-sectional studies, which can explain the observed inconsistency between the results.
Work stress was positively related to occupational burnout in this review, reinforcing the concept that occupational burnout is a response to excessive stress at work [81]. Strengths and Limitations This systematic review has several strengths. One is the focus on exhaustion as an outcome, as it is the main component of occupational burnout [10,[81][82][83]. Other strengths are the inclusion of only longitudinal studies, but with different durations of follow-up and various occupations (e.g., healthcare employees, teachers, police officers, civil servants, etc.). Since cross-sectional studies do not consider temporality [84], and therefore are inconvenient for causal inference [85], we included only longitudinal studies. This ensured that the exposure preceded the occupational burnout onset in at least 87% of the included studies. Only 13% of the included studies did not report whether the association between the predictors and exhaustion was temporal, but this was taken into account when assessing the risk of bias of the studies. Based on our results concerning the latency of occupational burnout, we recommend that future research considers a longitudinal design with multiple waves [86] and at least one year of follow-up of exposed workers. Finally, we managed to review occupational burnout predictors, considering their type, effect size, and role (protective versus harmful), and the overall evidence of their importance. For the quantitative synthesis, each assessment was performed independently of the others in order to avoid biased conclusions. As the vote-counting method accounting for the significance of the results has been criticized, to control bias we used vote-counting based on the direction of effect [87]. Moreover, we complemented the quantitative synthesis with a comprehensive risk of bias assessment and the grading of the overall quality of evidence according to PRISMA guidelines and the most validated and appropriate tools (MEVORECH and GRADE). However, we should also consider limitations when interpreting the results of this review. Out of the 85 included studies, 34 (40%) did not control for confounding factors. The sampling method did not ensure a representative sample in the majority (84%) of the studies. The included studies used a longitudinal design, but 11% did not include the same sample in all the waves. As most included studies were conducted and published before the harmonized occupational burnout definition was released, the occupational burnout measurements, even for exhaustion, were highly heterogeneous. The literature search was not extended to the gray literature for three main reasons: there is no consensus on a standardized method for conducting these searches; the full-text studies may be unavailable after the initial search has taken place; and the gray literature is not published in peer-reviewed journals, which is a fundamental indicator of quality [88]. Due to the large number of references screened and reviewed, on the one hand, and the multiple methodological approaches implemented in this review on the other, several studies were published during or after the compilation of this review. When checking the databases for new publications up to November 2020, thirteen eligible studies were identified, containing four new predictors in addition to the 261 predictors that we reviewed [89][90][91][92]. Due to time and resource constraints, these studies were not reviewed.
However, their results were assessed, and we believe that their inclusion would not change the results and conclusions of the present review. Study Implications and Further Perspectives Predictors with protective effects, e.g., job resources, could act as a buffer for the harmful effects of other predictors, e.g., job demands [93]; this means that increasing some predictors with protective effects, such as social support, could reduce mental health problems among workers even with high job demands [28,94]. Hence, decreasing harmful factors may not necessarily increase protection, and it may not be sufficient to reduce predictors with harmful effects without increasing predictors with protective effects [95]. A recent systematic review of preventive interventions with work-focused components showed that implementing these interventions has economic benefits for employers and society through reducing sick leave duration and accelerating recovery from mental health conditions such as depression, or through improving supervisors' communication with employees suffering from mental health problems [96]. Nevertheless, preventive interventions can also take into account personal-focused components along with the work-focused ones; therefore, combined interventions are more beneficial [97,98]. Occupational burnout results in low self-esteem, feelings of guilt, dissatisfaction with work, reduction in the quality of work, absenteeism, intention to quit the job, turnover, family problems, work-home conflict, and reduction in the quality of life [99,100]. Thus, it is beneficial to implement and evaluate strategies targeting the increase of protective factors (i.e., predictors with protective effects) and the reduction of risk factors (i.e., predictors with harmful effects). The need to improve the methodological quality of future studies addressing occupational burnout etiology is an important research avenue. All the included studies used self-assessment instruments for both exposures (predictors) and outcome (occupational burnout), and this can produce a common method bias [101]. Using more objective hetero-evaluation methods along with the most validated PROMs for occupational burnout [11] is a priority for this area of research. Future research should address all the above-mentioned methodological issues and focus on longitudinal studies with multiple waves of at least one year. Unanswered questions and inconsistencies between results, e.g., age and sex effects [102], should also be addressed. Before concluding, it is noteworthy that the Maslach Burnout Inventory (MBI), by far the most widely used measure of occupational burnout, is largely "preset" to correlate with job-related factors. Indeed, many MBI items involve causal attributions to work (e.g., "I feel burned out from my work"; "I feel frustrated by my job"; see Maslach et al., 2016 [103]). Because many MBI items relate burnout symptoms to work-related determinants in their very content, MBI-based research on the links between burnout and job-related factors is at risk of producing self-fulfilling predictions. It is worth bearing this in mind when interpreting our findings as well as previous findings pertaining to burnout and its job-related predictors. Conclusions Preventive interventions for occupational burnout might benefit from intervening on the established predictors by promoting adaptive coping and leisure and reducing job demands and negative job attitudes.
More research on the other predictors using high methodological standards is necessary to increase the scientific evidence regarding burnout etiology and prevention. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ijerph18179188/s1, Figure S1: The full literature search strategy; Table S1: Description of the included studies in the systematic review; Table S2: Description of the grouping of predictors into (sub)families with the theory behind them; Table S3: The plots of z-scores per predictor.
Management of parotid fistula and Frey’s syndrome with Botulinum neurotoxin type A The common cause of parotid fistula is parotid gland surgery and is frequently due to injury to the gland rather than to the duct. The frequency of postparotidectomy fistula is 14%. Other causes include facial trauma, congenital anomalies of the parotid gland, malignancies originating from the parotid gland, and infections. Although there are several options for the treatment of parotid fistula and Frey’s syndrome, very few treatment options are deemed optimal. The use of Botulinum A neurotoxin as a conservative method of treatment for parotid fistula and Frey’s syndrome is a recent and evolving concept. INTRODUCTION A parotid fistula is a rare complication and difficult to treat. A parotid fistula is an epithelialized tract between the parotid gland or its duct and the skin, which is manifested by salivary discharge from a wound site. The common cause of parotid fistula is parotid gland surgery, frequently due to injury to the gland rather than to the duct, and the frequency of postparotidectomy fistula is 14%. [1] Other causes are facial trauma, congenital anomalies of the parotid gland, malignancies originating from the parotid gland, and infections. [2] Frey's syndrome, also called gustatory sweating, first described by Lucie Frey in 1923, [3] is a complication of parotid surgery, and its occurrence rate varies from 4% to 62%. [4][5][6][7] Frey's syndrome is characterized by sweating, flushing, burning, itching, or neuralgic pain in the preauricular region in response to mastication and salivation. The pathophysiological mechanism is an aberrant reinnervation of the denervated sweat glands and cutaneous blood vessels by postganglionic parasympathetic nerve fibers. Consequently, when acetylcholine is liberated from the parasympathetic nerve endings in response to mastication and salivation, it induces sweating and flushing, which was initially a sympathetic response. [8] Currently, there are several options for the treatment of parotid fistula and Frey's syndrome, but very few are optimal. The use of Botulinum A neurotoxin (BTA) as a conservative method of treatment for parotid fistula and Frey's syndrome [9,10] is a recent and evolving concept. In this article, we report a case of simultaneous parotid fistula and Frey's syndrome, which developed after superficial parotidectomy and was successfully treated with BTA injection. CASE REPORT A 32-year-old male patient presented with the complaint of watery discharge from a wound over the left parotid region for the past 8 months. The fistula had started 2 weeks after he had undergone superficial parotidectomy for pleomorphic adenoma. The patient was diagnosed as a case of postparotidectomy parotid fistula and initially treated conservatively with regular pressure dressing, anticholinergics, and antibiotics, but symptoms were not relieved. Subsequently, the patient also developed features of Frey's syndrome 7 months after the surgery. The patient was diagnosed with simultaneous parotid fistula and Frey's syndrome [Figure 1]. Forty units of BTA were injected subcutaneously in the parotid region. There were no complications such as facial nerve injury or facial artery or masseter muscle trauma.
The parotid fistula resolved within 4 days, with complete closure of the fistulous opening, and the gustatory sweating (Frey's syndrome) ceased after 6 days [Figure 2]. There was no recurrence after a follow-up period of 1 year. DISCUSSION The treatment options for parotid fistula can be classified as surgical and conservative. Surgeries such as duct ligation, sectioning of the auriculotemporal (Jacobson's) nerve, delayed primary repair of the duct, reconstruction of the duct with a vein graft or mucosal flaps, excision of the fistulous tract, and total parotidectomy have been mentioned in the literature. Conservative treatment options including anticholinergic therapy, radiotherapy, stopping oral intake, and insertion of drains have been reported. [1] Currently, there is no established treatment option for parotid fistula. Although the use of BTA as a subcutaneous injection is a novel technique, very little data have been reported in the literature. Frey's syndrome can be corrected surgically by using temporalis fascia, a sternocleidomastoid flap, a superficial musculoaponeurotic system flap, and biomaterial or autologous implants as interpositioning grafts. Over the past few years, the medical treatment of Frey's syndrome has been attempted with topical antiperspirants and injections of alcohol, scopolamine, and glycopyrrolate. However, at present, BTA is the most widely used substance for the treatment of Frey's syndrome, and BTA treatment is comparatively well established, as it has been shown to improve gustatory symptoms drastically, thereby improving quality of life. [8] BTA is a potent neurotoxin produced by the Gram-positive anaerobic bacterium Clostridium botulinum and was first described in 1895. [9] BTA acts on the peripheral cholinergic nerve endings by inhibiting the calcium-mediated release of acetylcholine vesicles at the presynaptic neuromuscular junction. [10,11] There are seven serotypes of Botulinum neurotoxin, designated A through G, each with its own immunogenic specificity. BTA is the most potent of all and the most widely used serotype; it is commercially available for medical use and is FDA approved. In 1978, Botulinum toxin (BTX) was used to treat strabismus. [12] Since then, there have been several indications of BTA for a wide spectrum of medical conditions. The treatment of parotid fistula with BTA was first reported in 2001. [13] As the secretomotor nerve fibers of the salivary gland are mostly cholinergic, autonomic, and parasympathetic, when BTA is transported to the nervous tissue it blocks neurotransmitter release at the cholinergic nerve endings, thereby decreasing salivary production; as a result, the fistula tract closes. BTX injections are usually effective in patients with Frey's syndrome. [14] Mainly, BTX-A is used for salivary gland disorders [15][16][17] and gustatory sweating, although the use of BTX-B [18] and BTX-F [19] has been described in salivary gland diseases. To treat sialoceles and salivary fistulae, the dose of BTX-A injected into the parotid gland varied across studies, ranging from 10 to 60 mouse units. [20] Laskawi et al. reported a total dose of BTX-A between 10 and 40 U, depending on the size of the remaining glandular compartment, for postparotidectomy fistulae. [16] The BTA injection can be performed as an outpatient procedure under topical anesthesia. BTA 100 units is diluted with 2.5 ml of normal saline; hence, each 0.1 ml is equal to 4 units.
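The dilution arithmetic just described can be made explicit. The short sketch below simply restates the numbers from this case report (100 U reconstituted in 2.5 ml, a 40 U dose); it is illustrative bookkeeping only, not clinical guidance.

```python
# BTX-A dilution and dose bookkeeping from the case report.
total_units = 100.0      # vial content, units of BTX-A
diluent_ml = 2.5         # normal saline added

units_per_ml = total_units / diluent_ml          # 40 U/ml
units_per_0_1_ml = 0.1 * units_per_ml            # 4 U per 0.1 ml graduation

dose_units = 40.0                                # dose injected subcutaneously
dose_volume_ml = dose_units / units_per_ml       # 1.0 ml across the parotid region
remnant_units = total_units - dose_units         # 60 U, refrigerated for up to 7 days

print(units_per_ml, units_per_0_1_ml, dose_volume_ml, remnant_units)
```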
The parotid region is divided into four quadrants (anterior, posterior, upper, and lower). For parotid fistula and Frey's syndrome, the injection is done with a 1 cc BD syringe and a 26G needle in the subcutaneous plane in the affected region. A total of 40 units of BTA is injected in the subcutaneous plane of the parotid region. [20] The remaining 60 units of BTA can be stored in a refrigerator for 7 days. If there are residual symptoms of gustatory sweating or fistula, the BTA injection can be repeated within 7 days. The complications of BTA injection are abscess formation at the injection site, nausea, vomiting, dry mouth, respiratory muscle weakness, and headache. The lethal dose in humans is around 3000 U. The therapeutic dose ranges from 25 to 300 U. The peak neuromuscular blockade effect of the toxin occurs within 24-72 h after exposure and persists for 4-6 months. The secretomotor fibers of glands are blocked for 10-12 months. The treatment of parotid fistula or Frey's syndrome is difficult and frustrating, and most of the treatment options presently available are suboptimal and ineffective. BTA injection is very safe and appears to be a very effective technique for patients suffering from parotid fistula and Frey's syndrome. Declaration of patient consent The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed. Financial support and sponsorship Nil.
Integral and series representations of the digamma and polygamma functions We obtain a variety of series and integral representations of the digamma function $\psi(a)$. These in turn provide representations of the evaluations $\psi(p/q)$ at rational argument and for the polygamma function $\psi^{(j)}$. The approach is through a limit definition of the zeroth Stieltjes constant $\gamma_0(a)=-\psi(a)$. Several other results are obtained, including product representations for $\exp[\gamma_0(a)]$ and for the Gamma function $\Gamma(a)$. In addition, we present series representations in terms of the trigonometric integrals Ci and Si for $\psi(a)$ and the Euler constant $\gamma=-\psi(1)$. In this paper, we obtain various representations of the digamma function via the connection $\gamma_0(a) = -\psi(a)$ [22]. These in turn lead to many special cases, including the values $\psi(p/q)$ for rational argument, and further imply representations of the polygamma functions. We obtain product representations for $\exp[\gamma_0(a)]$ and $\Gamma(a)$. We present series representations for $\ln \Gamma(a)$, $\psi(a)$, and $\gamma$ using the trigonometric integrals Si and Ci. In addition, we provide several summations over parameterized values of Ci. The following is an example of representations that we develop. Proposition 1 and this Corollary subsume expressions for $\gamma$ given in [18]. In addition, from Proposition 1 follow representations for the polygamma functions, and these include Corollary 2. We have for $\operatorname{Re} a > 0$ the stated representations. Hence we obtain representations at positive integer arguments for the harmonic numbers $H_n \equiv \sum_{k=1}^n 1/k$ and the generalized harmonic numbers $H_n^{(r)} \equiv \sum_{k=1}^n 1/k^r$. For these we have $H_n = \psi(n+1) - \psi(1) = \psi(n+1) + \gamma$. By inspection, we see that the right sides of (1.12)-(1.14) properly vanish at $a = 1$. Then we may determine the asymptotic dependence to all orders of a certain second moment of the Riemann xi function. For this, we put $\xi(s) = (s-1)\pi^{-s/2}\Gamma(1+s/2)\zeta(s)$, and $\Xi(t) \equiv \xi(1/2+it)$. The moment integral here, going back to Ramanujan, is of interest from many points of view [4]. Then we have Proposition 2, giving (1.20). In this expression, $\operatorname{Ci}(\pi) \simeq 0.07366079$ and therefore the sum terms provide small corrections. Alternatively, this result follows from the Euler-Maclaurin summation expression, where $P_1$ is the polynomial given in (2.29). Similarly, for $r > 1$ we have the analogous representation (1.27), involving the quantities $\operatorname{Ci}(\beta n)$, $z^n$, and $\ln n/[n(n+1)]$. Proof of Propositions We let $\psi'$ be the trigamma function and $(b)_n = \Gamma(b+n)/\Gamma(b)$ be the Pochhammer symbol. Proposition 1. Preliminary relations are contained in the following Lemma. The rest of the Lemma follows easily upon noting (e.g., p. 259 of [1]) the relevant series. We now write from the Lemma the integral (2.2). The integrand of (2.2) being absolutely convergent, the interchange of summation and integration is justified. In order to achieve hypergeometric form, we note the ratios of successive terms. Upon using the series definition of the function ${}_3F_2$, we therefore obtain the hypergeometric form. Here, we have used the transformation (6) of [18], valid for $\operatorname{Re} s > 0$ and $\operatorname{Re}(v - t) > 0$. We have obtained (1.3). Next we have (2.6). The integral in (2.6) becomes (2.9). Therefore, from (2.6) we find (2.10). We can carry out the integration by using the partial fraction decomposition [13] of $N!/[x(x+1)\cdots(x+N)]$. If we employ the Beta function integral in (2.10) we find, performing the integral over $t$, the representation (1.5). Putting $x = X$ and $y = (1-Y)/X$, with Jacobian of transformation $\partial(X,Y)/\partial(x,y)$, and employing the Beta function integral of (2.12), we have the expressions that are equivalent to (1.6c).
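The harmonic-number relation quoted above is easy to check numerically. The sketch below verifies $H_n = \psi(n+1) + \gamma$ with SciPy; the companion trigamma identity $H_n^{(2)} = \zeta(2) - \psi'(n+1)$ is a standard fact added here for illustration, not quoted from the paper.

```python
import numpy as np
from scipy.special import digamma, polygamma

n = 25
H1 = sum(1.0 / k for k in range(1, n + 1))       # H_n
H2 = sum(1.0 / k**2 for k in range(1, n + 1))    # H_n^(2)

# H_n = psi(n+1) + gamma
assert np.isclose(H1, digamma(n + 1) + np.euler_gamma)

# H_n^(2) = zeta(2) - psi'(n+1), with zeta(2) = pi^2/6
assert np.isclose(H2, np.pi**2 / 6.0 - polygamma(1, n + 1))

print("identities verified for n =", n)
```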
We note that the reciprocity relation itself provides the complementary asymptotic relation as $\alpha \to 0$, for then $\beta = 1/\alpha \to \infty$. We also note that the leading term of the asymptotic relation in Corollary 6 is connected with the skew self-reciprocal inverse Fourier cosine transform. This transform may be calculated by logarithmic differentiation with respect to $x$ of the integral $\frac{2}{\pi}\int_0^\infty \alpha^x \cos(\alpha t)\,d\alpha$, with $-1 < \operatorname{Re} x < 0$, at $x = -1/2$. Remarks. If in (1.5) we put $u = \exp(-t)$, we obtain the corresponding form; if in (2.15b) we instead carry out the integration over $Y$, we recover the integral of (2.1). Here $0 < p < q$, and the prime means that when $q$ is even the term with index $n = q/2$ is divided by 2. Therefore, we have found representations for all the values of (2.21). The digamma and polygamma functions satisfy many properties, including functional equations, duplication and multiplication formulas, and reflection formulas. All such properties must be inherent in our various series and integral representations. As an illustration, we have the following; the Corollary then follows from the factorization (2.24). Proposition 2. We have previously obtained the integral representation [7] (2.86). The idea of the proof is to suitably expand the logarithm of the integrand, and then to perform termwise integration. For this we write the expanded series; shifting the index in the summations gives the expression (1.20). Remarks. The asymptotic forms of Si and Ci are easily obtained to any order by repeated integration by parts. It is then easy to see that the summations in (1.20) have summands that are $O(1/j^2)$, and additionally these leading terms have sign alternation according to $(-1)^j$. For comparison purposes, we recall an earlier result [9], equation (2.27), involving the values $\operatorname{Ci}(2\pi j)$, that readily shows how to develop $1 - \gamma$ from $1/2$ with a series of corrections, and the leading terms in the corrections are easily written. We recall the constant (e.g., [19], p. 345), and that $P_1(x)$, the first periodized Bernoulli polynomial, has the standard Fourier series [1] (p. 805); here we integrated by parts and made a simple change of variable. As a second proof of (2.27) we have the following. We write the double sum-integral; here the interchange of summation and integration is justified by the absolute convergence of the $x$-integration. In the last step, we applied Hermite's expression for the digamma function ([19], p. 91 or [12], 8.361.3, p. 943). Further representations for combinations of the Euler constant and zeta values may be obtained from the following. Lemma 2. We have for $k \geq 2$ the stated evaluation. We provide an operational proof, using the Dirac delta function $\delta$. Integrating by parts, we have the required relation. Of course $I_k \to 0$ as $k \to \infty$, and we have the simple bound $I_k \leq 1/k$. We let $B_k(x)$ be the Bernoulli polynomials and $P_k(x) = B_k(x-[x])$ their periodized form. In applying Lemma 2 we may use the relation with $B_0(x) = 1$, and the Fourier series (2.35) for $n \geq 1$ [1] (p. 805). Then we have Corollary 9, with parts (a) and (b). Proof. For part (a) we use the combination (2.38). Proposition 3. From (2.40) (e.g., [11], p. 107), using the Fourier representation (2.29) and performing the integration gives the Proposition. Summary We have developed a variety of series and integral representations for a family of
Bullet impacts in building stone excavate approximately conical craters, with dimensions that are controlled by target material Bullet impacts are a ubiquitous form of damage to the built environment resulting from armed conflicts. Bullet impacts into stone buildings result in surficial cratering, fracturing, and changes to material properties, such as permeability and surface hardness. Controlled experiments into two different sedimentary stones were conducted to characterise surface damage and to investigate the relationship between the impact energy (a function of engagement distance) and crater volumes. Simplified geometries of crater volume using only depth and diameter measurements showed that the volume of a simple cone provides the best approximation (within 5%) to crater volume measured from photogrammetry models. This result suggests a quick and efficient method of estimating crater volumes during field assessments of damage. Impact energy has little consistent effect on crater volume over the engagement distances studied (100–400 m), but different target materials result in an order of magnitude variation in measured crater volumes. Bullet impacts in the experiments are similar in appearance to damage caused by hypervelocity experiments, but crater excavation is driven by momentum transfer to the target rather than a hemispherical shock wave. Therefore, in contrast to predictions of impact scaling relationships for hypervelocity experiments, target material plays the dominant role in controlling damage, not projectile energy. Contemporary conflicts cause devastating damage to the built environment through the use of aerial bombings, artillery strikes, and ground based weapons. In addition to the large scale destruction imposed by explosives and artillery, smaller scale damage results from bullet and shrapnel impacts. This scale of damage is often overlooked during initial post-conflict surveys of damaged heritage, despite being common to nearly all current and historical conflicts since the use of early firearms.
Many buildings damaged this way are considered to be culturally significant heritage sites, such as religious buildings across Ukraine damaged by artillery and shrapnel during the current conflict 1 , or the targeted demolition and looting of Palmyra in Syria 2 . There is an emerging understanding that for stone buildings, these regularly overlooked forms of damage are associated with more than just surficial cratering [3][4][5][6][7][8] . Fracture networks can extend deep within the stone, creating 4-7 times more new surface area than the impact crater alone 8 . Grain fracturing and pore space collapse directly below the impact lead to compaction, locally reducing permeability and surface hardness. This volume is surrounded by a region of greater surface hardness reduction and increased permeability 7 . Internal fracture intensity decreases with distance away from the crater floor, which, together with the surface hardness and permeability changes, affects the stone's resistance to further deterioration from weathering processes 8,9 . A higher effective porosity, i.e. the combination of inherent porosity and impact induced fractures, facilitates greater ingress of moisture via capillary flow 10 . This moisture can dissolve matrix and constituent minerals, reducing overall stone strength and further increasing its effective porosity. Moisture transports dissolved salts into the stonework, which create outward pressures upon crystallisation, widening pore spaces and fractures. This results in the loss of material from the surface of the stone, reduced stone strength, and an exacerbated negative feedback loop of stone deterioration [10][11][12][13] . It is thus vital for effective conservation efforts that the surface and subsurface expressions of impact damage are comprehensively understood. This study characterises impact damage under controlled conditions for different target materials and projectiles in order to investigate potential relationships with resultant damage. For heritage affected by armed conflict, the capture of adequate digital imagery for representative 3D models may not be possible in all situations, so alternative methods must be used. Campbell et al. 15 compared crater profiles measured manually using a Barton comb with profiles extracted from a 3D model. This study investigates a simpler approach: can crater volumes be estimated using just depth and diameter measurements and simplified volume geometries? A simple approach for estimating crater volumes is invaluable for surveys of heritage damage in conflict zones, where factors such as safety or accessibility can limit effective time on site. Comparing crater volumes to the kinetic energy of the impactor allows important deductions to be made about the physics of the cratering mechanism. In the latter part of the paper, accurate crater volume estimates from photogrammetry are used to compare the damage and scaling relationships of bullet impacts with those of hypervelocity experiments. The comparison yields insights into cratering mechanics. Methods and materials Target materials and projectile impacts. Freshly quarried cubes (15 × 15 × 15 cm) of Stoneraise Red Sandstone (SRS) and Cotswold Hill Cream Limestone (CHCL) were selected as target stones because of their analogous properties to heritage stones in the Middle East, such as the Mokattam Limestone of Egypt, and the Umm Ishrin sandstones of Petra, Jordan [16][17][18] .
The Cotswold Hill Cream Limestone is an oolitic grainstone from the Middle Jurassic Inferior Oolite (quarried near Ford, UK). The average grain size is 0.5 mm and it has a porosity of ∼20% (see Fig. 1a). The Stoneraise Red Sandstone has a fine-medium (0.125–0.5 mm) grain size, and comes from a quartz-rich bed of the Permian New Red Sandstones (quarried near Penrith, UK) (see Fig. 1b). It has a porosity of ∼11% and generally no internal layering, though some blocks exhibit visible beds of coarser grains (∼1 mm). The density of each sample was determined by measuring the dry mass of the block and dividing by the volume (3375 cm³ for all samples). Controlled firearm experiments were carried out at Cranfield Ordnance Test and Evaluation Centre (Gore Cross, UK) to simulate conflict damage to stone. Two different types of ammunition used in contemporary and past conflicts were fired at 90° to the target face. Firstly, 5.56 x 45 mm NATO (abbreviated as NATO) is a standardised cartridge used in the British SA80 assault rifle, the American M16 family of assault rifles, and many other military issue firearms around the world. The second ammunition type is a 7.62 x 39 mm cartridge (abbreviated as AK-47), commonly fired from AK-variant rifles, such as the widely known AK-47. Both ammunition types are spitzer ogive nosed projectiles with a brass jacket and lead core (see Fig. 1c,d), but the NATO projectile also has a steel tip within the brass jacket. The AK-47 projectile has a mass of 7.95 grams (123 grains) and a bulk density of 13.25 g cm⁻³. The NATO projectile has a mass of 4.04 grams (63 grains) and a bulk density of 8.08 g cm⁻³. The bulk density of each projectile was calculated by dividing the projectile mass by the volume of water displaced by the projectile in a graduated cylinder. Both cartridges were remotely fired from mounted proof barrels 14 m from the target. Projectile velocity was measured using a Weibel SL-525P Doppler radar system (400 mW, 10.525 GHz). The kinetic energy ($E_k$) of the projectile at the point of impact was calculated using $E_k = \frac{1}{2} m v_i^2$, where $m$ is the projectile mass and $v_i$ is the projectile velocity at the point of impact. Test shots were conducted on an open range at standard propellant load to measure the velocity decay of each projectile, providing desired velocities for simulated engagement distances. Propellant loads for each cartridge were adjusted to reduce velocities to simulate impacts at distances of 200 m in limestone and sandstone targets. Further experiments at a simulated distance of 400 m were conducted in limestone targets to acquire a set of damaged blocks for a different study, but whose crater geometry is beneficial to include here. One further shot was conducted at full propellant load (muzzle velocity) into a sandstone target. Average engagement distances (i.e. the distance between combatants) of urban firefights during the Iraq War ranged from 26 m to over 126 m, and most soldiers are trained for engagement distances of 0-600 m, so 200 m represents a reasonable distance for simulating impacts in both urban and open scenarios 19,20 . Concrete blocks were placed on all faces, except the target face, for confinement. Target blocks with bedding were oriented so that foliations were parallel to the target face (i.e. perpendicular to trajectory). Natural stone is typically strongest when loaded perpendicular to bedding, so target blocks were oriented with a consistent bedding orientation relative to the target face.
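As a brief illustration of the kinetic energy calculation, the sketch below uses the projectile masses quoted above; the velocities are assumed nominal muzzle velocities for these cartridges (roughly 715 m/s for 7.62 x 39 mm and 940 m/s for 5.56 x 45 mm), not the Doppler-measured values from the experiments.

```python
# Kinetic energy E_k = 0.5 * m * v_i**2 for each projectile.
# Masses are from the text; velocities are assumed nominal muzzle
# velocities, NOT the measured values from these experiments.
projectiles = {
    "7.62x39 (AK-47)": {"mass_kg": 7.95e-3, "v_ms": 715.0},   # assumed velocity
    "5.56x45 (NATO)":  {"mass_kg": 4.04e-3, "v_ms": 940.0},   # assumed velocity
}

for name, p in projectiles.items():
    e_k = 0.5 * p["mass_kg"] * p["v_ms"] ** 2
    print(f"{name}: E_k ~ {e_k:.0f} J")
# ~2032 J and ~1785 J at these assumed velocities
```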
Target properties.

To investigate the influence of target strength on impact damage, compression tests were conducted on undamaged blocks of each stone type to measure the uniaxial compressive strength (UCS) 21 and the indirect tensile strength 22. Cylindrical cores (20 mm diameter × 40 mm length) were drilled perpendicular and parallel to bedding. Cores were loaded at a constant rate of 0.005 mm s⁻¹ using a Zwick/Roell Z050 static testing machine. The standard force, deformation, and time step were recorded using the TestXpert III software (version 1.5). Linear regression was carried out on straight sections of the stress-strain curves to find the axial Young's modulus parallel and perpendicular to bedding for each stone type. The UCS (σ_u) was calculated using the equation σ_u = P/A_c, where P is the failure load and A_c is the cross-sectional area of the core. Further cylindrical cores (30 mm diameter) for measuring the indirect tensile strength were cut parallel to bedding, and then into 15 mm thick disks for Brazilian tests 22. The prepared disks were mounted on their thin edge between flat plates and loaded perpendicular to bedding at a constant rate of 0.005 mm s⁻¹. The indirect tensile strength (σ_t) was then calculated by σ_t = 2P/(π t D_d), where P is the failure load, t is the thickness of the disk and D_d is the disk diameter. The ultrasonic pulse velocity (UPV) was measured in twelve undamaged blocks of each stone type using a Proceq Pundit 200 with 54 kHz exponential transducers (pulse voltage = 200 V, receiver gain = ×1, frequency = 20 Hz). UPV was measured in each of the three orthogonal directions by placing the transducers on opposite faces. A bulk UPV value was calculated by averaging the three orthogonal directions.

Characterising damage morphology.

Damaged samples were photographed through a 360° rotation at three overlapping camera positions using a 14-megapixel Fujifilm FinePix S3400 digital camera. Samples were overturned and the process was repeated, resulting in a total of 6 overlapping camera orientations. Additional images were taken across the impact crater to ensure adequate capture of morphology. Meshroom (v2020.1.1), a free and open-source structure-from-motion (SfM) pipeline developed by AliceVision®, was used to process the ∼300-400 images into a 3D mesh 23,24. In CloudCompare (version 2.11.3) 25, the impact crater was isolated from the full block mesh, scaled, and oriented with the target surface horizontal and an azimuth direction of 000° directed towards the top edge of the block in its firing position. Crater volumes were measured in CloudCompare, and morphology profiles were extracted from the 3D point clouds using Python code (version 3.8.11; code available in 15). Impact craters were outlined in QGIS (version 3.16.15) from plan-view photographs. The edge of the crater was defined visually as the transition point from a depression, not including radial fractures, to undamaged target face. These outlines were analysed in ImageJ (version 1.53) to measure the crater area (A), which was used to calculate an area-equivalent diameter (D_eq) using D_eq = 2√(A/π). Crater volumes measured from the digital models were compared to the volumes of three simplified geometries (V) derived from just crater depth (d) and radius (r = 0.5 D_eq) measurements.
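The laboratory quantities above reduce to three short formulas. Here is a minimal sketch collecting them, assuming the standard UCS and Brazilian-disc expressions (σ_u = P/A_c and σ_t = 2P/(π t D_d)) together with D_eq = 2√(A/π); the failure loads and crater area used in the example are hypothetical.

```python
import math

def ucs(P_n: float, core_diameter_m: float) -> float:
    """Uniaxial compressive strength sigma_u = P / A_c (Pa)."""
    A_c = math.pi * (core_diameter_m / 2) ** 2
    return P_n / A_c

def brazilian_tensile(P_n: float, thickness_m: float, disk_diameter_m: float) -> float:
    """Indirect tensile strength sigma_t = 2P / (pi * t * D_d) (Pa)."""
    return 2 * P_n / (math.pi * thickness_m * disk_diameter_m)

def area_equivalent_diameter(area: float) -> float:
    """D_eq = 2 * sqrt(A / pi); output units follow the input area."""
    return 2 * math.sqrt(area / math.pi)

# Hypothetical failure loads, chosen to give plausible magnitudes:
print(f"UCS: {ucs(12_500, 0.020) / 1e6:.1f} MPa")                      # 20 mm core
print(f"sigma_t: {brazilian_tensile(2_000, 0.015, 0.030) / 1e6:.1f} MPa")
print(f"D_eq of a 9 cm^2 crater: {area_equivalent_diameter(9.0):.2f} cm")
```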
The simplified geometries selected have previously been used to describe crater geometries in hypervelocity experiments: a simple cone [26-28], where V = (1/3)πr²d; a spherical cap 29, where V = (1/6)πd(3r² + d²); and a paraboloid, typically representing the transient crater 27, where V = (1/2)πr²d.

Results

Target properties. Compression tests show that the sandstone targets have higher compressive and tensile strengths than the limestone targets. Reported strengths are the average value of the n cores measured ± one standard deviation (also available in Supplementary Table S1). The uniaxial compressive strength perpendicular and parallel to bedding for the Stoneraise Red Sandstone (SRS) (n=9) is 40.0 ± 5.9 MPa and 44.0 MPa, respectively.

Surface damage. All experiments resulted in the formation of an impact crater and material loss. The floors of the impact craters have a fine-grained, powdery appearance with a pale discolouration. Damage varies with lithology and projectile type. Sandstone targets impacted with AK-47 projectiles exhibit shallow, cone-shaped craters with average depths of 4.6 mm, diameters of 33.8 mm, and volumes of 1.9 cm³ (see Table 1). There are few visible surface fractures surrounding the impact crater, and where present they are short and have closed apertures. Some samples have a dark grey discolouration in and around the impact crater from lead within the projectile (see Fig. 2a). Limestone targets have a more complex, two-part structure of a steep-sided central excavation surrounded by a shallow dipping spall zone (see Fig. 2b). Fractures with open apertures radiate from the impact crater and can reach the edge of the target face. Limestone targets have more radial fractures with wider apertures than impacts into sandstone targets. NATO impacts into limestone targets caused craters with an average depth of 23.3 mm and diameter of 65.1 mm. Crater volumes are over twice as large (24.7 vs. 11.0 cm³) as those of comparable impacts into sandstone targets. For the studied engagement distances (i.e. simulated distance between firearm and target), the impact energy does not appear to have a strong influence on crater volume. For near-identical impact energy, there can be up to an order of magnitude difference in crater volume (see Fig. 3). Of the studied simplified crater geometries, the simple cone provides the closest estimate to the volume of the crater measured by photogrammetry, with sandstone craters underestimated by 4.9% ± 12.0 on average and limestone craters slightly overestimated by 1.4% ± 18.2. These values are substantially smaller than the overestimations for sandstone and limestone craters by the spherical cap (52.8% ± 23.2 and 80.2% ± 61.2, respectively) and paraboloid (42.6% ± 17.9 and 52.1% ± 27.4, respectively) geometries (see Fig. 4a). The simple cone geometry was also applied to asymmetric craters created by oblique impacts 15. The geometry estimates crater volumes within 6.3% of the photogrammetry values, almost as accurate as for the perpendicular impacts (4.9%).
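The three simplified geometries can be compared directly in code. Below is a minimal sketch of the volume estimators, evaluated with the average AK-47-into-sandstone crater dimensions from Table 1; note that applying the formulas to averaged dimensions is illustrative only and will not reproduce the per-crater statistics quoted above.

```python
import math

def crater_volumes(depth: float, diameter: float) -> dict:
    """Simplified crater volume estimates from depth d and diameter D.
    Uses r = D/2; units of cm in give cm^3 out."""
    r, d = diameter / 2, depth
    return {
        "cone":          math.pi * r**2 * d / 3,
        "spherical_cap": math.pi * d * (3 * r**2 + d**2) / 6,
        "paraboloid":    math.pi * r**2 * d / 2,
    }

# Average AK-47-into-sandstone crater (Table 1): 4.6 mm deep, 33.8 mm wide.
est = crater_volumes(depth=0.46, diameter=3.38)       # cm
measured = 1.9                                        # cm^3, photogrammetry average
for name, v in est.items():
    print(f"{name:13s} {v:5.2f} cm^3  ({(v - measured) / measured:+.0%} vs measured)")
```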
Discussion

For both the simple cone-shaped crater and the more complex two-part structures, radial fractures centred on the impact crater, and crushed target material on the crater floor, resemble damage resulting from hypervelocity experiments 28,30,31. In this study, relatively undeformed projectile material (the steel tip of the NATO projectile) is embedded in the floor of the crater, unlike most hypervelocity experiments in which the projectile is melted and/or ejected 32,33. The embedded projectile material here lies at the base of short, cylindrical penetration channels, akin to observations made from experiments investigating the penetration of rigid steel rods into concrete 34. Corrosion of the projectile's steel tip when exposed to the elements after impact may locally exacerbate fractures, similar to the deterioration seen in reinforced concrete due to corrosion of rebar, except on a much smaller scale 35. There is no evidence of any AK-47 projectiles penetrating into targets, only smearing of lead material around or in the impact crater. The simple cone geometry provides the best estimation (within 5%) of the measured crater volume using depth and diameter measurements. The spherical cap and paraboloid geometries substantially overestimate the measured crater volume. This overestimation stems from the morphological differences of the geometries, visualised in Fig. 5. The concave-down form of the crater walls, created by the two-part structure of a deep central pit and surrounding spall zone, diverges from the simplified geometries (cone, spherical cap, paraboloid), which have a straight or concave-up form to their wall profiles. This effect is more prominent in the spherical cap and paraboloid geometries, which is reflected in overestimations of 50-80%. Additional geometry measurements, such as the width and depth of the central excavation or spall zone, may provide better estimates of crater volume, but the extra time and effort required in measuring these values would compromise the goal of a quick and efficient field method. Simplifying crater geometries to estimate volume from two rapidly acquired measurements allows many impacts to be studied in a shorter time than photogrammetry. Measurements of depth and diameter are possible with simple analogue tools such as calipers and depth gauges. Although this study took a digital approach to these measurements, it is unlikely that substitution with analogue values will affect the overall conclusions, as Campbell et al. 15 show reasonable agreement between analogue crater profiles obtained using a Barton comb and profiles measured from photogrammetry models. Volumes can be estimated in the field with the simplified geometry, providing an overview of crater volume distribution while investigators are on site, supporting first-response assessments of conflict damage to heritage. Imaging of a site for photogrammetry models can be done relatively quickly (minutes per impact), but the post-field production and analysis of models (hours to tens of hours) lengthens the overall method time. Smartphone cameras, and the Light Detection and Ranging (LiDAR) capability of new-generation iPhones or hand-held scanners, are increasingly able to generate 3D SfM models approaching the precision of those using digital cameras and SfM software, or those derived from terrestrial laser scanning (TLS) 36,37. The LiDAR sensors in iPhones were developed to enhance photographs, and not to produce surface coordinates like traditional TLS. However, downloadable applications have been developed that utilise the iPhone hardware to produce models of comparable precision to SfM and TLS methodologies 36.
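For completeness, once a point cloud has been oriented with the target face at z = 0 (as done in CloudCompare above), measuring crater volume from a 3D model reduces to a short routine. The sketch below is an assumed, simplified stand-in for such a 2.5D volume tool, not the workflow used in this study: it grids the points in x-y and integrates the mean depth per cell, and checks itself against the analytic cone volume on synthetic data.

```python
import numpy as np

def crater_volume_25d(points: np.ndarray, cell: float = 0.1) -> float:
    """Estimate crater volume from an oriented point cloud (target face
    at z = 0, crater depths negative) by gridding points in x-y and
    integrating the mean depth per occupied cell."""
    x, y, z = points.T
    ix, iy = np.floor(x / cell).astype(int), np.floor(y / cell).astype(int)
    volume = 0.0
    for cx, cy in set(zip(ix, iy)):
        depth = -z[(ix == cx) & (iy == cy)].mean()
        if depth > 0:
            volume += depth * cell * cell
    return volume

# Self-check on a synthetic conical crater (radius 1.7 cm, depth 0.46 cm):
rng = np.random.default_rng(0)
r = 1.7 * np.sqrt(rng.random(50_000))      # uniform sampling over the disc
theta = 2 * np.pi * rng.random(50_000)
pts = np.column_stack([r * np.cos(theta), r * np.sin(theta),
                       -0.46 * (1 - r / 1.7)])
# Both values should be close to ~1.39 cm^3:
print(crater_volume_25d(pts), math := np.pi * 1.7**2 * 0.46 / 3)
```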
At present, the measurement of crater volumes and fracture orientations from 3D models in the field is still limited by the need for computers with appropriate software. Analogue field measurements remain the simplest and most accessible means of initial damage assessment. Photogrammetry and simplified volume estimations could be viewed as complementary methods. Volume estimation from depth and diameter measurements provides a good first-order method of quantifying impact damage and its distribution, enabling on-site testing of hypotheses and targeted data collection towards areas at highest risk of future deterioration. If the situation permits, imaging of the site for SfM photogrammetry models provides a more accurate quantification of the damage, as well as digitally preserving heritage sites in a way that can be used as a baseline to track changes over time 38,39. The three simplified geometries presented here show an increasing overestimation of crater volume with increasing depth/diameter ratio (see Fig. 4b). This is likely the result of the deeper central pits, causing divergence of crater wall morphology from the straight or concave-up profile of the simplified geometries. Care should therefore be taken when estimating the volume of craters with higher depth/diameter ratios. This method has been developed for impact craters with good rotational symmetry (created by perpendicular impacts); however, the simple cone geometry does suitably estimate the volume of craters created by oblique impacts (within 6.3%). In hypervelocity experiments, crater volume is linked to the kinetic energy of the projectile (i.e. impact energy). The greater the amount of energy available, the larger the peak pressures experienced by the target, and the greater the material failure [40-42]. Hypervelocity experiments exhibit well-established correlations between increasing impact energy and crater volume (Fig. 6). Impact energies and crater volumes presented here are of a similar magnitude to some hypervelocity experiments (Fig. 6). However, for the range of impact energies (approximating engagement distances of 100-400 m) of this study, the crater volumes do not follow the relationship with impact energies observed in the MEMIN (Multidisciplinary Experimental and Modelling Impact Research Network) 43 or Moore et al. 44 hypervelocity studies. For a given impact energy, limestone targets from this study have larger crater volumes than hypervelocity experiments. In hypervelocity studies, strength-regime crater size is described by generalised scaling relationships of the form V = f(m, ρ_p, v_i, ρ_t, Y; μ, ν), where V is the crater volume, m, ρ_p and v_i are the projectile's mass, density and velocity, ρ_t is the target density, Y is the measure of target strength, and μ and ν are scaling exponents 45. For strength-controlled craters, V increases at a rate somewhere between momentum scaling (V ∝ m v_i) and energy scaling (V ∝ m v_i²), imposing limits for μ of 1/3 < μ < 2/3 45,46. The scaling relationship can also be written using three scaling parameters (pi-scaling): cratering efficiency (π_V), a strength term (π_3), and a density term (π_4). Multiple linear regression of the experiments conducted here failed to produce values for μ and ν of any statistical significance and within the limits for μ.
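To make the regression step concrete, the sketch below shows how values of μ and ν could be sought by multiple linear regression on log-transformed pi-groups. The pi-group definitions used here are the commonly cited strength-regime forms and should be read as assumptions, since the paper's own scaling equation is not reproduced in this excerpt; all data values are placeholders.

```python
import numpy as np

# Placeholder per-shot data (SI units); real values would come from
# Table 1 and the strength measurements above.
V     = np.array([1.9e-6, 11.0e-6, 24.7e-6])    # crater volume, m^3
m     = np.array([7.95e-3, 4.04e-3, 4.04e-3])   # projectile mass, kg
rho_p = np.array([13250.0, 8080.0, 8080.0])     # projectile density, kg/m^3
v_i   = np.array([700.0, 900.0, 900.0])         # impact velocity, m/s (assumed)
rho_t = np.array([2300.0, 2300.0, 2000.0])      # target density, kg/m^3 (assumed)
Y     = np.array([40.0e6, 40.0e6, 9.0e6])       # target strength, Pa (assumed)

# Commonly used strength-regime pi-groups (assumed forms):
pi_v = rho_t * V / m
pi_3 = Y / (rho_t * v_i**2)
pi_4 = rho_t / rho_p

# Power-law model pi_v = K * pi_3**a * pi_4**b is linear in logs;
# mu is recovered from a = -3*mu/2. The exponent b on pi_4 encodes
# nu in a form-dependent way.
X = np.column_stack([np.ones_like(pi_v), np.log(pi_3), np.log(pi_4)])
coef, *_ = np.linalg.lstsq(X, np.log(pi_v), rcond=None)
logK, a, b = coef
mu = -2 * a / 3
print(f"K = {np.exp(logK):.3g}, mu = {mu:.2f} (valid if 1/3 < mu < 2/3)")
```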
The generalised equation was derived for non-porous materials, which poses the question of its applicability to the porous targets of this study. However, hypervelocity impact experiments with a range of non-zero sample porosities could be used to calculate values of μ and ν 43,44,47, whilst numerical models found no change in μ for target porosities of 0-35% 48. This suggests that target porosity is not the sole reason for the failure to obtain values of μ and ν in this study. The use of pi-scaling assumes that the impact causes a shock wave that is equivalent to an explosion at depth, and assumes a point source 47. The validity of this assumption may be why hypervelocity impact craters remain relatively circular except at very low impact angles 47. A condition of the point source assumption is that the impact velocity far exceeds the target sound speed 49. The impact velocities of the experiments reported here are similar to or below the UPV (i.e. sound speed) values of the target lithology, so these experiments may not produce a shock wave at impact. Without a shock wave, crater excavation is instead driven by momentum transfer from the projectile to the target, a process influenced by the strength of both the target and projectile materials. Limestone targets in this study had compressive and tensile strengths 75-80% and 50% weaker, respectively, than the sandstone targets, resulting in greater crater volumes than sandstone impacts, even at lower impact energies (see Fig. 3).

Figure 4. (a) Estimated crater volumes normalised to the crater volume measured from photogrammetry models, plotted against photogrammetric volume. Sandstone targets (filled markers) have smaller crater volumes than limestone targets (hollow markers). The simple cone geometry (triangle marker) provides the closest estimate to the measured volume (dashed line). (b) Estimated crater volumes normalised to the crater volume measured from photogrammetry models, plotted against depth/diameter ratio. There is a statistically significant, though weak, trend of increasing overestimation with increasing depth/diameter ratio (see Supplementary Table S2).

The strengths of each target lithology were measured under quasi-static strain rates (<10 s⁻¹), but rock strength is strain-rate dependent, increasing rapidly after a threshold strain rate 50. Rae et al. 51,52 show that the dynamic compressive strength of rocks can be double the quasi-static strength at strain rates >10² s⁻¹. Cho et al. 53 show that tensile strength increases at strain rates of 10⁰-10¹ s⁻¹. Bullet impacts exhibit strain rates of 10³-10⁶ s⁻¹, varying due to quantities such as target and projectile material, impact energy, impact trajectory, and projectile shape [54-56]. The target strengths used here are therefore minimum values. The clear correlation between target strength and crater volume indicates that any increase in strength due to strain rate may be comparable between the two lithologies. The projectile strength in these experiments appears to have an influence on damage, with the harder steel tip of the NATO projectile resulting in larger impact craters than comparable impacts using the lead-cored AK-47 projectiles. The steel tip of the NATO projectiles remains relatively undeformed and embedded in the crater floor, likely experiencing a greater interaction time with the target. Barnouin-Jha et al.'s 57 low-velocity (85-250 m s⁻¹) experiments yielded results incompatible with proposed crater scaling relationships, which was suggested to have been due to increased interaction time between projectile and target.
They propose that the penetration time is critical to the cratering process, and that depth/diameter ratios will be larger than expected for impacts at much higher velocities. Kenkmann et al. 43 reported depth/diameter ratios ranging from 0.1 to 0.56 for impact velocities of 2500-7850 m s⁻¹. Average depth/diameter ratios of the experiments here (0.13-0.35) fall within this range of values for much lower impact velocities, so do not initially appear to support Barnouin-Jha et al.'s 57 suggestion. The ogival shape of the projectiles in this study is different from the spherical projectiles used in both the hyper- and low-velocity experiments discussed, possibly increasing penetration potential and reducing the direct comparability between the sets of experiments. Target lithology is a bigger determining factor of final crater volume than impact energy, despite the scatter observed here (see Fig. 6) 15. This could be used in conjunction with knowledge of heritage construction materials to prioritise post-conflict efforts on weaker materials. There is up to an order of magnitude variation amongst the crater volumes measured from photogrammetry models for the same impact energy (see NATO projectile into sandstone targets in Fig. 3). The cause of this variability in impact geometry under very similar impact conditions may be internal variations within the target lithologies. Despite target blocks being quarried from the same beds and oriented in the same way with respect to internal foliation, natural sedimentary stone has inherent variability that may result in variable crater volumes under the same conditions. There is similar inherent variability in hypervelocity experiments (e.g. the MEMIN 43 and Moore et al.'s 44 data, see Fig. 6), for which scaling relationships could still be derived. A different form of scaling relationship might exist for the ordnance-velocity experiments presented here, which additional experiments at a greater range of impact energies could help to derive.

Conclusions

Bullet impacts into limestone produce wider, deeper, and more voluminous impact craters than the same projectiles impacting sandstone targets. Limestone targets also have tensile strength values 50% lower, and compressive strength values 75% lower, than sandstone targets. Sandstone targets impacted with 7.62 × 39 mm (AK-47) projectiles have shallow, cone-shaped craters. Targets impacted with 5.56 × 45 mm NATO projectiles, and impacts of both projectiles into limestone targets, have a two-part structure consisting of a steep-sided central excavation pit surrounded by a shallow dipping spall zone. Radial fractures are centred around the impact and reach the edge of the target block, providing conduits and entry points for weathering agents such as salt and moisture. The volume of a simple cone, calculated from two simple measurements of crater depth and diameter, estimates crater volume within 5% of the accurate value determined from photogrammetry models. This result allows for a quick and efficient method for initial assessment of heritage sites damaged in armed conflict. Impact craters generated here are similar in size and morphology to craters generated by hypervelocity experiments. However, projectile velocities below the sound speed of the target, penetration of the projectile, and the lack of scaling between crater size and impact energy imply that damage is not governed by a shock wave.
Crater excavation is instead controlled by momentum transfer, strongly influenced by target and projectile properties. Thus, over the range of impact energies studied, engagement distance has little consistent effect, but target material typically creates an order of magnitude variation in crater volume. This suggests that heritage sites built of stone with lower strength values are at risk of greater damage from conflict.

Data availability

All the data used in this study are provided in the supplementary information.
Multimodality magnetic resonance imaging for the diagnosis of high-flow priapism following a straddle injury

Abstract

Rationale: Priapism is a common urologic emergency, but high-flow penile priapism (HFP) caused by trauma is very rare. Therefore, HFP diagnosis and treatment are still not standardized. Patient concerns: A 29-year-old man was admitted to the urology department of our hospital on August 01, 2019, due to "persistent penile erection caused by a straddle injury." Diagnosis: On July 17, 2019, the patient underwent Doppler ultrasonography, which indicated a swollen corpus cavernosum. Interventions: The patient took over-the-counter anti-inflammatory drugs, but the erectile state of the penis remained unchanged. A second perineal injury resulted in hospital admission. Multimodality magnetic resonance imaging (MRI) scan showed nodular abnormal signals at the right corpus cavernosum root. Subsequently, selective arterial interventional angiography confirmed the MRI findings. Spring coils were then inserted for embolization, and the pseudoaneurysm, fistula, and priapism disappeared. Outcomes: Two months after surgery, sexual stimuli could normally cause penile erection, with normal hardness. The patient's sexual life returned to normal 3 months after surgery. Conclusion: Multimodality MRI is very effective in detecting high blood-flow priapism. Its application would improve the clinical management of this ailment.

Introduction

Priapism refers to a state of continuous penile erection exceeding 4 hours, independent of sexual desire or stimulation, with an incidence approximating 1.5/100,000. It comprises the low-flow (ischemic, painful) and high-flow (nonischemic, painless) types. [1] Priapism represents a common urologic emergency, but high-flow penile priapism (HFP) caused by trauma is very rare in clinical practice (about 15% of all priapism cases). [2,3] Therefore, HFP diagnosis and treatment are still not standardized. Multimodality magnetic resonance imaging (MRI), an important examination method, is noninvasive and easy to perform, and can directly display the pseudoaneurysm and penile artery-cavernous fistula caused by penile artery injury. In this study, a patient with post-traumatic HFP was reported. Multimodality MRI [T1-weighted imaging (T1WI), FS-T2WI, diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC), and enhanced scan] was used for diagnosis, which was confirmed by selective arterial interventional radiography.

Case report

A physically healthy 29-year-old male patient with no history of related functional diseases was admitted to the urology department of our hospital on August 01, 2019, due to "persistent penile erection caused by a straddle injury for more than 1 month and aggravation for 1 day." Perineal numbness was felt immediately after the straddle injury, and perineal pain occurred 10 minutes later. The perineum and penis swelled gradually, accompanied by persistent penile erection without sexual stimulation, with a hardness of about grade 2 according to the Erection Hardness Grading Scale (EHGS). EHGS scores range from 0 (penis not enlarged) to 4 (penis completely hard and fully rigid). [4] There was no obvious improvement or aggravation before and after urination. The patient also had local congestion, but no open bleeding, dysuria, blood in the urine, painful urination, or fever. On July 17, 2019, the patient underwent Doppler ultrasonography at a local hospital, which indicated a swollen corpus cavernosum, but a clear diagnosis could not be reached.
The patient did not pay much attention and took over-the-counter anti-inflammatory drugs (specific information unknown), and perineal pain and congestion gradually improved. However, the erectile state of the penis remained unchanged, and there was pain after pressing, with no significant improvement. The preceding day, the patient had another straddle injury at work, and erection hardness was increased, resulting in hospital admission. At admission, examination showed normal penile development; the penis was in an erectile state, with a hardness of about grade 2. The middle part of the penis was dorsally curved, with pain upon pressing. The skin color of the bilateral scrotum was normal, and the testicles and epididymides were normal. Routine urine testing showed 8 erythrocytes and 28 leucocytes in urine, with no obvious abnormalities in routine blood tests. Blood gas analysis showed a pH of 7.445, PCO2 of 34.3 mm Hg, PO2 of 145.0 mm Hg, and SO2 of 99.3%, indicating that the penis was in a high-flow state. Upon admission, multimodality MRI scan showed nodular abnormal signals at the right corpus cavernosum root. T1WI was dominated by high signals, accompanied by internal low-signal shadows. FS-T2WI showed uneven circular nodular high signals, surrounded by low-signal shadows. DWI revealed obvious high signals, and the ADC values indicated low signals, with a lesion size of approximately 1.2 × 1.8 cm. Enhanced scanning showed significant enhancement within and around the above abnormal signals. The right corpus cavernosum was readily displayed, with continuous enhancement. The right and dorsal penile arteries were slightly thickened, and the left penile artery and penile corpus cavernosum showed no abnormal contrast enhancement. In addition, penile shape and size showed no overt abnormalities. Subacute hematoma of a penile artery pseudoaneurysm at the right corpus cavernosum root was diagnosed, with penile artery-cavernous fistula (Fig. 1). Subsequently, the patient underwent selective arterial interventional angiography, revealing a right penile artery pseudoaneurysm with a diameter of about 2.0 cm, as well as thrombus generation and cavernous fistula formation, which confirmed the diagnosis achieved by MRI. Two 2.0 × 5.0 mm and one 3.0 × 2.5 mm spring coils (Boston Scientific Corporation: 300 Boston Scientific Way, Marlborough) were then inserted for embolization. After successful embolization, the pseudoaneurysm and fistula disappeared (Fig. 2). Priapism also disappeared, and erection hardness changed from continuous grade 2 to grade 0. Two months after surgery, sexual stimuli could normally cause penile erection, whose hardness was normal (grade 4). The patient's sexual life returned to normal 3 months after surgery. The institutional review board of our hospital approved the study. Written informed consent was obtained from the patient for publication of this case report and accompanying images.

Discussion

Priapism is a rare pathological erectile state, which could occur at any age, including in newborns. Children aged 5 to 10 years and adults aged 20 to 50 years are the age groups most affected. [5] Compared with low-flow priapism, high-flow priapism is rarer and mostly caused by trauma. After cavernous artery injury, arterial blood flows directly from the injured site to the sinus in the corpus cavernosum without returning to the vein through the spiral artery, forming a persistent high-flow state.
Other causes include hereditary metabolic disorders, hematological diseases, local vascular malformations, and spontaneous rupture of angiomas. [6] The current case was caused by trauma. Currently, the diagnosis of high-flow priapism is mainly based on medical history, physical examination, laboratory examination (eg, blood gas analysis of the corpus cavernosum), and auxiliary examinations such as color Doppler ultrasound and angiography. Most patients with high-flow priapism have a history of perineal trauma. Physical examination shows incomplete erection of the penis, which could become completely erect if stimulated. Blood gas analysis of the corpus cavernosum is an effective method for diagnosing this disease. [7] Because blood in the corpus cavernosum originates from the ruptured internal perineal artery or the cavernous artery, it is bright red. Blood gas indexes are close to arterial blood levels, with increased pathological arterial blood flow. Color Doppler ultrasound in patients with arterial-cavernous fistula could show colored blood flow signals, with the corpus cavernosum revealing an arterial spectrum. Chiou et al [8] proposed that color Doppler ultrasound is useful for assessing relief in arteriogenesis and veno-occlusion and for making decisions on subsequent therapy. To date, multimodality MRI has not been performed to diagnose traumatic priapism. [9] MRI mainly depends on tissue proton mass and movement in the magnetic field. After trauma, the cavernous artery is injured, and blood accumulates in the sinus of the corpus cavernosum. Cases with arterial-cavernous fistula could also form pseudoaneurysms. The MRI sequence could clearly show the complex pathological changes and blood hemoglobin modifications after bleeding: oxygenated hemoglobin → deoxyhemoglobin → methemoglobin → hemosiderin. [10] Changes in the signal intensity of the lesion could also be observed. The present patient had a hematoma and pseudoaneurysm after trauma, and was in the transition from deoxyhemoglobin to methemoglobin at the time of admission, which reflects the middle and late stages of subacute hemorrhage. Therefore, T1WI was dominated by high signals, accompanied by internal low-signal shadows, whereas FS-T2WI showed uneven circular nodular high signals, surrounded by low-signal shadows; DWI showed obvious high signals, while ADC values indicated low signals. Contrast-enhanced MRI showed significant enhancement inside and around the above abnormal signals. Because the right penile artery was injured, the corpus cavernosum was displayed with continuous enhancement. In pediatric HFP, it has been suggested that an observation period be introduced in the management algorithm of HFP, to avoid unnecessary surgical intervention. [11] In conclusion, compared with traditional color Doppler ultrasound and selective arterial interventional angiography, multimodality MRI is more effective in detecting high blood-flow priapism. Indeed, it is noninvasive and simple to perform, and allows visualization of pseudoaneurysms and penile arterial-cavernous fistulas caused by penile arterial injury, accurately reflecting the pathological distribution of blood and clearly diagnosing high blood-flow priapism. Therefore, multimodality MRI is expected to become a routine examination technique with improved clinical diagnosis of high blood-flow priapism. Its application would improve the clinical management of this ailment.
Tipping the balance: toward rational combination therapies to overcome venetoclax resistance in mantle cell lymphoma

Mantle cell lymphoma (MCL), an aggressive but incurable B-cell lymphoma, is genetically characterized by the t(11;14) translocation, resulting in the overexpression of Cyclin D1. In addition, deregulation of the B-cell lymphoma-2 (BCL-2) family proteins BCL-2, B-cell lymphoma-extra large (BCL-XL), and myeloid cell leukemia-1 (MCL-1) is highly common in MCL. This renders these BCL-2 family members attractive targets for therapy; indeed, the BCL-2 inhibitor venetoclax (ABT-199), which already received FDA approval for the treatment of chronic lymphocytic leukemia (CLL) and acute myeloid leukemia (AML), shows promising results in early clinical trials for MCL. However, a significant subset of patients show primary resistance or will develop resistance upon prolonged treatment. Here, we describe the underlying mechanisms of venetoclax resistance in MCL, such as upregulation of BCL-XL or MCL-1, and the recent (clinical) progress in the development of inhibitors for these BCL-2 family members, followed by the transcriptional and (post-)translational (dys)regulation of the BCL-2 family proteins, including the role of the lymphoid organ microenvironment. Based upon these insights, we discuss how rational combinations of venetoclax with other therapies can be exploited to prevent or overcome venetoclax resistance and improve MCL patient outcome.

Leukemia (2022) 36:2165-2176; https://doi.org/10.1038/s41375-022-01627-9

BACKGROUND

Mantle cell lymphoma (MCL) is a rare but aggressive B-cell lymphoma, defined by the translocation t(11;14), resulting in the constitutive overexpression of cyclin D1 [1]. The disease comprises 3-10% of all non-Hodgkin lymphomas (NHL) and the median age at diagnosis is 60-65 years. MCL is thought to combine the unfavorable features of both indolent and aggressive NHL subtypes, as it is incurable with conventional chemoimmunotherapy and has a more aggressive disease course [2]. Following relapse upon standard chemoimmunotherapy, with or without autologous transplant, patients are currently being treated with other chemotherapy regimens or with recently developed targeted therapies such as the Bruton's tyrosine kinase (BTK) inhibitor ibrutinib or the recently approved anti-CD19 chimeric antigen receptor (CAR) T-cell therapy [2]. However, after failure of these salvage therapies, treatment options are strongly reduced. Targeting apoptosis using the B-cell lymphoma-2 (BCL-2) inhibitor venetoclax is a promising novel therapeutic approach for MCL, with overall response rates (ORR) of 50-75% in early clinical trials, depending upon the number and type of pretreatments the patients received. However, eventually most patients still relapsed, and complete remission (CR) was only achieved by 18-21% of patients [3-5]. Combining venetoclax with other targeted therapies or chemotherapy regimens might overcome this resistance. Currently, several clinical trials are ongoing in MCL evaluating venetoclax in combination with current standard treatments such as lenalidomide or bendamustine (e.g., NCT03295240, NCT03523975). The development of rational combinations based on the underlying mechanisms of resistance is expected to be more efficient and successful.
In this review, based upon insights into the underlying mechanisms of venetoclax resistance, including the role of genetic alterations, transcriptional and (post-)translational regulatory processes, and microenvironment-derived stimuli, we will present potential combination therapies to prevent or overcome venetoclax resistance in MCL.

VENETOCLAX RESISTANCE

Several mechanisms underlying venetoclax resistance have been described, with high levels of the alternative anti-apoptotic proteins myeloid cell leukemia-1 (MCL-1) and/or B-cell lymphoma-extra large (BCL-XL) as the most outstanding determinants of resistance [6-9]. These anti-apoptotic proteins serve as a buffer for the released pro-apoptotic BH3 proteins, preventing the activation and oligomerization of BAX and BAK (Fig. 1). Elevated MCL-1 or BCL-XL levels can be caused by genetic aberrations and microenvironmental interactions or, in the case of secondary resistance, by prolonged venetoclax treatment. In MCL cell lines rendered venetoclax-resistant by continuous exposure to venetoclax, upregulation of MCL-1 and, to a lesser extent, of BCL-XL was observed as compared to their sensitive parental counterparts [6,7,9]. Notably, comparison of samples from chronic lymphocytic leukemia (CLL) patients before and after venetoclax treatment also revealed upregulation of at least one of the anti-apoptotic proteins by venetoclax treatment [10,11]. Expression levels of the anti-apoptotic proteins alone are not sufficient to dictate venetoclax sensitivity; the level of occupation of BCL-2 by pro-apoptotic proteins such as BIM also determines the sensitivity to venetoclax. If BCL-2 is occupied by BIM, as is often the case in MCL, BIM can be immediately released upon venetoclax exposure and trigger cell death [7-9]. These cells are so-called primed for death and show better venetoclax responses. A common mechanism of resistance to other targeted therapies is the acquisition of mutations in (the binding site of) the target of the inhibitor. However, for MCL, only one patient with mutations in the BH3-binding groove of BCL-2 has been reported yet. For CLL, with higher numbers of patients relapsed on venetoclax, BCL2 mutations have been more frequently observed, mostly after prolonged treatment duration [6,7,12-14]. These mutations, e.g., G101V, D103X and V156N, specifically reduce venetoclax binding to BCL-2, while BH3 protein binding is not affected. However, BCL2 mutations are present at rather low allele frequencies and it remains unclear whether these infrequent subclones might render the whole malignant population resistant. These mutations were not found in large genomic analyses of biopsies from MCL and CLL patients relapsed from venetoclax plus ibrutinib or venetoclax monotherapy, respectively, probably due to too low-resolution sequencing [5,11,15,16]. However, these studies did reveal several other genetic aberrations in relapsed patients, involving CDKN2A/B, CCND1, TP53, NOTCH1/2, ATM, KMT2D and SMARCA2/4, indicating a role for cell cycle regulation and chromatin remodeling. Nonetheless, no pattern was observed in the clonal evolution of these resistant clones, and the exact role of each individual genetic abnormality in conferring resistance to venetoclax is not clear yet, suggesting that venetoclax resistance is not solely driven by any particular single nucleotide variation, but rather involves complex changes.
In addition to mutational status and involvement of BCL-2 family members, regulators of energy metabolism have been identified as drivers of venetoclax resistance in CLL and diffuse large B-cell lymphoma (DLBCL) [11]. In MCL cells, no such relation between metabolism and venetoclax sensitivity has been reported yet, although metabolites can regulate the expression or interactions of BCL-2 family proteins in several cell models, indicating a potential role for energy metabolism in venetoclax resistance [8]. In conclusion, primary venetoclax resistance in MCL is mainly caused by elevated levels of the anti-apoptotic proteins MCL-1 and/or BCL-XL or by decreased priming of BCL-2. In the case of secondary resistance, upregulation of MCL-1 and BCL-XL has also been shown to be of great significance, while acquisition of mutations in the target of the inhibitor is rarely observed. Pending mechanistic insight into the role of metabolism, inhibition of MCL-1 and BCL-XL, either directly or by targeting upstream regulators, might be most achievable in order to overcome venetoclax resistance in MCL.

INHIBITION OF MCL-1 AND/OR BCL-XL

Combining venetoclax with MCL-1 and/or BCL-XL inhibition demonstrated strong synergy in several preclinical models [8,9]. However, whereas BCL-2 inhibition is already FDA-approved, the search for safe, effective, and selective MCL-1 and BCL-XL inhibitors has proven challenging. In this part, the current status of the development of inhibitors of MCL-1 and BCL-XL will be discussed.

Fig. 1 Classification of BCL-2 family proteins and interactions among BCL-2 family proteins. A There are three main types of BCL-2 family proteins: anti-apoptotic proteins, pro-apoptotic BH3-only proteins and pro-apoptotic effector proteins. Anti-apoptotic proteins contain all four BH domains, while the pro-apoptotic proteins lack certain domains. Most BCL-2 members also have a transmembrane (TM) domain for anchoring to organelles. B Cellular stress induces upregulation of BH3-only proteins. The sensitizer proteins interact with the anti-apoptotic proteins, disrupting their inhibition of BAX/BAK. Activator proteins can also interact with anti-apoptotic proteins, but they can also directly stimulate BAX and BAK to oligomerize and form a pore. C Interactions between the subtypes of BCL-2 family proteins. All of the anti-apoptotic proteins can interact with all effectors. Furthermore, they can all interact with the activator BH3-only proteins BIM, PUMA, and Bid. The sensitizer BH3-only proteins have a more selective binding pattern: NOXA only targets MCL-1 and BFL-1, HRK only targets BCL-XL, BAD and Bmf are able to antagonize BCL-XL, BCL-2, and BCL-W, and Bik interacts with all anti-apoptotic proteins except BFL-1. MOMP: mitochondrial outer membrane permeabilization.

MCL-1 inhibition

In the past years, several MCL-1-specific BH3-mimetics have entered development, although none of them has successfully completed clinical trials yet, due to the key role of MCL-1 in cardiac, neural and hepatic cell survival [6,7,17,18]. One alternative strategy may involve different (and safer) administration: encapsulation of the MCL-1 inhibitor S63845 in tumor-targeted nanoparticles allowed a 3.5-fold reduction in drug dose and more specific drug delivery in a DLBCL xenograft model, minimizing toxicities [19].

BCL-XL inhibition

The development of BCL-XL inhibitors has been hampered by severe on-target platelet toxicity [6,9].
Several strategies have been used to mitigate the thrombocytopenic effect of BCL-XL inhibitors, such as the use of proteolysis-targeting chimeras (PROTACs), antibody-drug conjugates, or prodrugs targeting BCL-XL. PROTACs link a target molecule to a specific E3 ubiquitin ligase, thereby promoting ubiquitination of the target protein. By linking BCL-XL to an E3 ubiquitin ligase which is poorly expressed in platelets, thrombocytopenia might be prevented. Currently, two BCL-XL-targeting PROTACs have been reported, XZ424 and DT2216, and both show potent in vitro cytotoxicity against tumor cells while sparing platelets [20]. Recently, DT2216 has entered clinical trials for relapsed and/or refractory solid tumors (NCT04886622). Apart from PROTACs, the first BCL-XL-targeting antibody-drug conjugate, ABBV-155, has entered phase I clinical trials (NCT03595059), as have the pro-drug APG-1252/palcitoclax (NCT03080311, NCT04210037) and the intravenous dendrimer conjugate AZD-0466 (NCT04214093, NCT04865419) [20]. However, no results for these trials have been reported yet, and so BCL-XL inhibition is not yet ready for clinical use. In conclusion, direct inhibition of MCL-1 or BCL-XL to sensitize MCL cells to venetoclax is not yet clinically safe. Therefore, targeting upstream positive regulators of MCL-1 or BCL-XL, or negative regulators of pro-apoptotic BH3-only proteins, to increase apoptotic priming might provide a promising alternative. To identify these upstream regulators, the next sections focus on the regulation of BCL-2 family proteins in healthy B cells and on their dysregulation in MCL.

(DYS)REGULATION OF BCL-2 FAMILY PROTEINS IN MCL

In healthy B cells the levels of the apoptotic proteins are tightly regulated via several key signaling pathways (Fig. 2). These pathways can be triggered by B-cell receptor (BCR) activation or microenvironmental stimuli and cellular stressors, which induce alterations in the transcriptional or (post-)translational regulation of the BCL-2 family members. In malignant B cells this regulation is often disturbed, due to genetic abnormalities and/or increased BCR/microenvironmental signaling. In this section, the key regulators of the BCL-2 family members are reviewed, in the context of healthy B cells as well as in the context of MCL. Next to these aberrations, TP53 dysregulation, either by 17p deletion or TP53 mutation, is observed in 20-40% of MCL patients [21,22]. Repression of p53 results in reduced transcription of pro-apoptotic proteins such as BAX and PUMA in response to cellular stress, particularly (chemotherapy-induced) DNA damage, as well as in reduced inhibition of BCL-2 and BCL-XL, reduced BAX and BAK activation, and reduced suppression of BCL-2 transcription (Fig. 2A) [9,24]. Other genetic alterations often observed in MCL, e.g., those involving SOX11 and cyclin D1, do not affect BCL-2 family protein expression [25].

Intracellular regulatory pathways

The altered BCL-2 family expression profile in malignant B cells also arises from increased activation of key regulatory pathways of BCL-2 family members compared to healthy B cells (Fig. 2). This is mainly caused by the distinct composition of the lymphoid microenvironment of malignant B cells. Here, first, the key regulatory pathways of BCL-2 family members in B cells in general will be briefly described, followed by the effect of the malignant lymphoid microenvironment on BCL-2 family protein expression in MCL.

JAK/STAT signaling.
Lastly, Janus kinase (JAK)/signal transducer and activator of transcription (STAT) signaling is a major regulator of BCL-2 family proteins. Activation of JAK and the subsequent translocation of STAT induces transcription of MCL-1 and BCL-XL (Fig. 2D) [26,31]. Activated JAK can also interact with the previously mentioned signaling pathways, as it can activate PI3K and ERK signaling [31].

Microenvironmental regulation of BCL-2 family expression

The key regulatory pathways of the BCL-2 family members in B cells can be activated by signals from their microenvironmental bystander cells (Fig. 3). The composition of the lymphoid microenvironment of malignant B cells is distinct from that of healthy B cells, thereby affecting the apoptotic balance. In this section, we discuss the effect of the malignant lymphoid microenvironment on BCL-2 family protein expression in MCL.

T cells. Activated T cells, via CD40 ligation and secreted soluble factors, induce upregulation of anti-apoptotic proteins such as BCL-XL and MCL-1, although downregulation of pro-apoptotic proteins has also been observed (Fig. 2A-C) [32-34]. The soluble factors primarily activate JAK/STAT signaling, resulting in transcription of the anti-apoptotic proteins (Fig. 2D).

Stromal cells. MCL cells also bi-directionally interact with stromal cells, thereby shifting the expression profile of the stromal cells toward a pro-tumor profile [32]. These stromal cells in turn prevent apoptosis of MCL cells via integrin binding to vascular cell adhesion molecule-1 (VCAM-1) and intercellular adhesion molecule-1 (ICAM-1), via secretion of chemokines such as C-X-C motif chemokine ligand-12 (CXCL12) and -13, and via expression of B-cell activating factor (BAFF). This results in activation of AKT, ERK and NF-κB signaling (Fig. 2A-C), and thus in upregulation of MCL-1, BCL-XL and BCL-2 [32,33,35,36]. Interestingly, direct adhesion to stromal cells was critical for their full protective effect [36].

Macrophages. The third group of cells which modulate the survival of lymphoma cells are the macrophages. MCL cells attract monocytes and promote their differentiation into tumor-associated M2 macrophages (TAMs) [37,38]. The precise role of TAMs in MCL is still unknown, although recent studies indicate that TAMs induce MCL growth via secretion of, e.g., IL-10 and BAFF [32,37,38]. Interestingly, T cells also play a role by CD40-mediated induction of IL-32β, which in turn instructs the TAMs to secrete BAFF [38,39]. This is in concordance with the more established role of TAMs in CLL, where TAMs induce lymphoma survival via secretion of a proliferation-inducing ligand (APRIL), BAFF, CXCL12 and -13, and Wnt5a, and via stimulation of the BCR and CD38 [40,41]. These stimuli activate NF-κB, AKT and ERK, resulting in upregulation of anti-apoptotic proteins, such as BCL-2, BCL-XL, and BFL-1, and downregulation of BAD (Fig. 2A-C) [28,42].

Lymphoma cells. Apoptotic tumor cells themselves can also produce signaling molecules that affect apoptosis of their neighboring tumor cells. For example, it has recently been described that apoptotic stress caused by BH3-mimetics in HeLa cells induces fibroblast growth factor (FGF)-2 secretion, which leads to ERK-dependent transcriptional upregulation of pro-survival BCL-2 proteins in the neighboring cells [43]. Whether apoptotic MCL cells also secrete such paracrine survival factors upon stress remains to be determined.

Other microenvironmental factors.
The lymphoid microenvironment further consists of non-cellular components which affect the apoptotic priming of MCL cells, such as extracellular matrix (ECM), antigens and bacterial epitopes. Binding of malignant B cells to ECM components activates the AKT, ERK, and NF-κB pathways, with a concomitant shift in apoptotic priming [32]. Antigens, either bound to TAMs or stromal cells or in suspension, activate the BCR and thereby primarily elevate MCL-1 levels via the AKT pathway and BCL-XL via the NF-κB pathway (Fig. 2A, B) [32,44,45]. Lastly, microbial epitopes present in the LN, such as CpG and lipopolysaccharides (LPS), activate toll-like receptors (TLRs), resulting in stimulation of, amongst others, the NF-κB pathway and thus expression of BCL-XL (Fig. 2B) [34,45,46]. Taken together, MCL cells show dysregulation of BCL-2 family members, with elevated levels of the anti-apoptotic proteins, either due to genetic aberrations (BCL-2) or microenvironmental stimuli (MCL-1 and BCL-XL), and with decreased levels of pro-apoptotic proteins, mostly caused by genetic aberrations (Table 1). The microenvironmental effect is supported by gene expression studies, showing upregulation of BCR and NF-κB pathway target genes specifically in MCL cells in the lymph node as compared to peripheral blood [44].

STRATEGIES TO OVERCOME PRIMARY AND SECONDARY VENETOCLAX RESISTANCE

Targeting key signaling cascades that engage molecules such as AKT or ERK, or disruption of microenvironmental interactions, might synergize with venetoclax to induce cell death of the MCL cells, while sparing the platelets and cardiac cells. In this part, we will discuss the most promising strategies to overcome venetoclax resistance.

Inhibition of the BCR signalosome

Suppression of integrin activation by inhibition of kinases from the BCR signalosome, such as BTK and PI3Kδ, mobilizes MCL and CLL cells from the LN to the PB, disrupting the growth- and survival-supportive microenvironmental interactions (Fig. 4) [47,48]. Mobilized CLL cells obtained from the PB of patients after treatment with the BTK inhibitor ibrutinib showed reduced MCL-1 and BCL-XL levels and enhanced BIM levels compared to pre-ibrutinib samples [10,49]. It is tempting to speculate that similar effects will be accomplished upon ibrutinib-evoked mobilization of MCL cells. An additional advantage of inhibiting the BCR signalosome is the reduced activity of the downstream NF-κB and AKT pathways and the consequent effects on the regulation of several BCL-2 family members (Fig. 4) [49-51]. Moreover, these inhibitors may also directly target the microenvironment: in CLL patients, T-cell activation and proliferation are strongly reduced after inhibition of the BCR signalosome, thereby preventing T-cell-mediated upregulation of MCL-1 and BCL-XL in CLL cells and subsequent venetoclax resistance [52]. Currently, several clinical trials assessing the effectiveness of the combination of venetoclax and ibrutinib are ongoing, and thus far they show impressive results (Table 2). The AIM study, a phase II study of ibrutinib and venetoclax in 24 patients with R/R MCL, demonstrated an ORR of 71% and a CR rate of 63% at 16 weeks of treatment [53]. Another phase II study, with additional obinutuzumab treatment included, even shows a small beneficial effect over the AIM study; however, longer follow-up will show whether this translates into prolonged remission duration [54].
Treatment with these drug combinations has also demonstrated high response rates in CLL, and the first results of the phase III trial of venetoclax and ibrutinib are also promising, firmly establishing the combination of ibrutinib and venetoclax as a therapeutic option in MCL [55-57].

Inhibition of integrin activation

Apart from inhibition of BCR signaling, CXCR4 antagonists (e.g., plerixafor/AMD3100) or integrin-blocking antibodies (e.g., natalizumab) disrupt microenvironmental interactions and hereby also mobilize lymphoma cells (Fig. 4) [36,58,59]. Furthermore, we have recently demonstrated that targeting of hematopoietic cell kinase (HCK) also impairs adhesion of MCL cells to the ECM and stromal cells [60]. Natalizumab and plerixafor are well tolerated in early clinical trials [59]; for HCK inhibitors, no clinical trials have been reported yet.

Inhibition of AKT signaling

Inhibition of the PI3K/AKT pathway is associated with the reduction of especially MCL-1 levels, but also with reduction of BCL-XL levels and accumulation of BAD and BIM (Fig. 4) [61,62]. Moreover, both intrinsic and acquired venetoclax resistance have been associated with enhanced AKT activation, and concomitant susceptibility to PI3K/AKT inhibition [62-64]. Therefore, synergism between PI3K/AKT/mTOR inhibitors and venetoclax has frequently been investigated and observed in vitro [8,33,61-64]. Whereas clinical development of AKT and mTOR inhibitors has been hampered by toxicities and limited efficacy, several clinical trials with PI3K inhibitors are ongoing. For MCL, no results have been reported yet for the trials combining venetoclax with PI3K inhibitors (NCT03379051, NCT04939272; Table 2), but for R/R CLL early results are encouraging, with ORRs of 89% (8/9) and 85% (11/13) for venetoclax combined with either duvelisib (a PI3Kδ/γ inhibitor) or with umbralisib (a PI3Kδ inhibitor) and ublituximab (anti-CD20), respectively. Although high rates of neutropenia and thrombocytopenia were observed, infections were infrequent, and phase II trials are currently ongoing [65,66].

Inhibition of protein translation

The synergy between AKT inhibition and venetoclax can partly be explained by mTOR-mediated reduction in translation of, amongst others, MCL-1 (Fig. 4). To prevent AKT-mediated toxicities of a combination therapy, inhibition of translation itself might also be a good approach to potentiate venetoclax activity in lymphoma cells. Indeed, ribosomal inhibition using homoharringtonine (HHT), or disruption of the interaction between eIF4E and eIF4G using SBI-0640756 or 4EGI-1, reduces MCL-1 and BCL-XL levels in MCL cell lines and primary CLL cells and potentiated the activity of BH3-mimetics [67-69]. Furthermore, we have recently established that targeting casein kinase 2 (CK2) in MCL cell lines and primary MCL samples represses MCL-1 translation and thereby synergizes with venetoclax (Thus et al., submitted). Clinical data in which venetoclax is combined with disruption of translation are currently lacking.

Inhibition of ERK signaling

Inhibition of the ERK pathway in combination with venetoclax is also an interesting option, since this pathway is a major regulator of BCL-2 family proteins as well (Fig. 2C). Whereas for AML the combination of venetoclax with ERK pathway inhibitors has frequently been reported and is currently evaluated in clinical trials, for mature B-cell lymphomas this combination is poorly studied.
Still, the few reports in which this combination has been studied do report synergy in CLL, multiple myeloma (MM), and DLBCL [70-72]. However, this is not observed in MCL: whereas the MEK1/2 inhibitor trametinib synergized with venetoclax in most CLL and MM cell lines and primary CLL samples, this only occurred in two out of seven MCL cell lines [72]. Thus, this combination may hold promise for clinical development in CLL and MM, but most likely not for MCL.

Inhibition of NF-κB activation

Since NF-κB signaling has a central role in the transcription of the anti-apoptotic BCL-2 family proteins (Fig. 2B), combining venetoclax with inhibition of NF-κB signaling might also be a promising approach to increase the efficacy of venetoclax in patients. Indeed, prevention of IκB degradation in MCL cell lines and primary samples downregulated BCL-XL and, in some cases, also MCL-1 or BCL-2 [34,41,73]. Furthermore, inhibition of the non-canonical NF-κB pathway using an NF-κB-inducing kinase (NIK) inhibitor in MCL and CLL reduced BCL-XL levels and led to increased vulnerability to venetoclax [38,74]. These results demonstrate the opportunities for such combination strategies; however, whereas several NF-κB inhibitors have been developed over the years, none of them has entered clinical practice yet, due to severe toxicities [75]. Recently, a phase II clinical trial has been launched in which NF-κB signaling is indirectly targeted in R/R CLL, the BeliVeR trial (NCT05069051). This triple combination of belimumab (a BAFF-neutralizing antibody which is FDA-approved for systemic lupus erythematosus (SLE)), venetoclax and rituximab prevented BAFF-induced venetoclax resistance in CLL in vitro [76]. If indeed beneficial for CLL patients, it would also be worthwhile to evaluate this for MCL. In addition, it would be worthwhile to evaluate BCMA-targeted antibody-drug conjugates (ADCs) in combination with venetoclax, as BCMA is one of the receptors for BAFF (Fig. 3) and is expressed on primary MCL in the LN [77]. The safety and tolerability of several of these ADCs are already under evaluation in clinical trials [78].

Inhibition of CDKs

Inhibition of cyclin-dependent kinases (CDKs) also provides a potential strategy to reduce MCL-1 levels and thereby increase venetoclax sensitivity in lymphoma cells. Several CDKs regulate MCL-1 levels, such as CDK2, which phosphorylates MCL-1, preventing its ubiquitination and binding to BIM, and CDK7 and CDK9, which are involved in the transcriptional regulation of MCL-1 by interacting with RNA polymerase II [79]. Due to its short half-life, MCL-1 is particularly susceptible to disruption of transcriptional activity. Moreover, venetoclax-resistant MCL cells show transcriptional remodeling and thereby increased susceptibility to, for example, CDK7 inhibition, emphasizing the potential of combining venetoclax with CDK inhibitors (Fig. 4) [5,16,80]. Although the rationale behind combining CDK inhibitors with venetoclax is strong and the combination synergizes in vitro, pan-CDK inhibitors such as alvocidib/flavopiridol and dinaciclib show low levels of clinical activity and/or have been plagued by toxicity in vivo [79]. To circumvent this toxicity, more specific and structurally different inhibitors have recently been developed, which induce MCL-1 reduction and sensitize cells to venetoclax in preclinical models [79-82].
For example, fadraciclib, a CDK2/9 inhibitor, completed a phase II trial in solid cancer patients without severe toxicities and is currently being tested in R/R CLL in a phase I trial combined with venetoclax (NCT03739554) [82]. Thus, sensitizing MCL cells to venetoclax by using CDK inhibitors might become effective once safe CDK inhibitors have been designed.

Epigenetic regulation

Another approach to sensitize MCL cells to venetoclax is to target the expression of BCL-2 family proteins using epigenetic inhibitors. Inhibition of bromodomain and extra-terminal (BET) proteins, histone deacetylases (HDACs), or protein arginine methyltransferase 5 (PRMT5) affects the expression of BCL-XL, MCL-1, and BIM and synergizes with venetoclax in vitro and in MCL mouse models (Fig. 4) [83-86]. Phase I clinical trials combining venetoclax with the BET inhibitor RO6870810 or the dual HDAC and PI3K inhibitor fimepinostat (CUDC-907) in R/R DLBCL showed manageable safety profiles and durable antitumor activity [87, 88], but in MCL no clinical trials have been initiated yet.

Inhibition of metabolism

Regulators of energy metabolism have also been established as drivers of venetoclax resistance, and cells that progress on venetoclax treatment show increased oxidative phosphorylation and AMPK signaling compared with before progression; targeting metabolism might therefore also potentiate venetoclax activity in MCL cells [8, 11]. Indeed, inhibition of glutamine uptake and its downstream pathways, of the AMPK pathway, or of the electron transport chain all overcome venetoclax resistance in several MCL cell lines and primary CLL samples [11, 89-91]. Moreover, in retrospective analyses of three clinical studies of CLL, background statin use was associated with a higher number of complete responses to venetoclax, although this was ascribed to upregulation of PUMA rather than to a metabolic pathway [90]. Despite the potential of combining metabolic inhibition with venetoclax, no such clinical trials have been launched for MCL. For CLL, a phase I trial combining venetoclax with the potent HMG-CoA reductase inhibitor pitavastatin has recently been initiated (NCT04512105). If the results are promising, it might be worthwhile to extend this trial to MCL.

CONCLUSION AND FUTURE DIRECTIONS

To increase the efficacy of venetoclax therapy in MCL, combining venetoclax with inhibitors targeting the other anti-apoptotic BCL-2 proteins, MCL-1 and BCL-XL, would be an excellent strategy. Although the development of MCL-1 and BCL-XL inhibitors is actively ongoing, it is uncertain whether these inhibitors will maintain a sufficient safety profile for widespread use. Therefore, indirectly targeting the expression of these proteins could be an attractive alternative, for example through BCR signalosome or NF-κB inhibitors (Fig. 5). Of the various strategies to sensitize MCL cells to venetoclax discussed in this review, the most advanced is venetoclax combined with BCR signalosome inhibitors, specifically BTK and PI3K inhibitors. These inhibitors efficiently reduce the expression of MCL-1 and BCL-XL without causing severe toxicities such as thrombocytopenia, as they specifically target B cells. The other rational combinations discussed show promising preclinical results; however, most of them are not yet clinically approved.
With only a few targeted therapies clinically approved so far, venetoclax is currently being evaluated extensively in combination with therapies that have a less strong mechanistic rationale, such as anti-CD20 therapy and (targeted) chemotherapy (Table 2). Anti-CD20 therapy, e.g., rituximab and obinutuzumab, showed impressive results in CLL in clinical trials and has recently been FDA-approved for CLL [92, 93]. The precise underlying mechanism is unknown, although anti-CD20 antibodies have been shown to counteract CD40-induced resistance in CLL cells in vitro, irrespective of BCL-2 family member alterations [94]. In MCL, venetoclax and anti-CD20 therapy are often evaluated in combination with chemotherapeutics or with targeted therapies such as BTK inhibition (Table 2). As in CLL, early results show high response rates and good tolerability in various MCL patient groups, although the follow-up time is short [95-99]. To reduce the side effects of standard chemotherapeutics, a phase II trial combining venetoclax with rituximab and a targeted chemotherapeutic, polatuzumab vedotin, has been launched (NCT04659044; Table 2). Other drugs that would be interesting to evaluate in combination with venetoclax are the multi-kinase inhibitors sorafenib and sunitinib. These inhibitors are already FDA-approved for solid tumors and have been shown to reduce MCL-1 levels in CLL cells and, in the case of sunitinib, also BCL-XL and BFL-1 levels [100, 101]. Sunitinib was identified in a screen of 320 kinase inhibitors as the most effective synergizer with venetoclax and can also partially overcome CD40L-induced venetoclax resistance, highlighting the opportunities of this drug combination [101]. Which of the rational combinations will be most successful will also vary per patient. Ideally, personalized drug combinations will be established using ex vivo drug combination screens, or predictive biomarkers or mutation profiles will be used; however, this is not feasible yet. To gain insight into which combination of drugs is best for a specific patient, more clinical trials and more studies into potential biomarkers are needed.

Fig. 5: Tipping the balance of MCL toward venetoclax sensitivity by rational combination therapies. In the basal, untreated situation, the apoptotic balance is tilted toward anti-apoptotic proteins (upper left panel). Venetoclax treatment removes BCL-2 from the balance, resulting in apoptosis in sensitive cells (upper right panel), but the balance remains tilted toward anti-apoptotic proteins in resistant cells, owing, for example, to enhanced MCL-1 or BCL-XL levels (lower left panel). Combining venetoclax with other targeted therapies that either decrease the amount of MCL-1 or BCL-XL or increase the levels of the pro-apoptotic proteins will tip the balance toward apoptosis. Long-term venetoclax treatment or microenvironmental stimulation counteracts these effects by increasing the anti-apoptotic effects and decreasing the pro-apoptotic effects (lower right panel).
Immunogenicity to biological drugs in psoriasis and psoriatic arthritis

Monoclonal antibodies or fusion proteins, defined as biological drugs, have modified the natural history of numerous immune-mediated disorders, allowing the development of therapies aimed at blocking the pathophysiological pathways of the disease and providing greater efficacy and safety than conventional treatment strategies. Virtually all therapeutic proteins elicit an immune response, producing anti-drug antibodies (ADAs) against the hypervariable regions of immunoglobulins. Immunogenicity against biological drugs can alter their pharmacokinetic and pharmacodynamic properties, thereby reducing the efficacy of these drugs. In more severe cases, ADAs can neutralize the therapeutic effects of the drug or cause serious adverse effects, mainly hypersensitivity reactions. The prevalence of ADAs varies widely depending on the type of test used, the occurrence of false-negative results, and non-specific binding to the drug, making it difficult to accurately assess their clinical impact. Concomitant use of immunosuppressors efficiently reduces immunogenicity in a dose-dependent manner, either by decreasing the frequency of detectable ADAs or by delaying their appearance, thereby enhancing the effectiveness of biological therapies. Among the new therapeutic strategies for the management of psoriasis, biological agents have gained increasing importance in recent years, as they interrupt key inflammation pathways involved in the physiopathology of the disease. Reports regarding ADAs in new biologics are still scarce, but the most recent evidence tends to show little impact on the clinical response to the drug, even with prolonged treatment. It is therefore essential to standardize the laboratory tests that determine the presence and titers of ADAs, in order to establish administration and management guidelines and determine the real clinical impact of these drugs.

INTRODUCTION

In the last decade, several new treatment methods have been developed to attack the various physiological mechanisms underlying inflammatory diseases. Monoclonal antibodies or fusion proteins, defined as biological drugs, have modified the natural history of numerous immune-mediated disorders, such as rheumatic diseases, inflammatory bowel disease, systemic vasculitis, and psoriasis (1). These agents have allowed the development of therapies targeting the pathophysiological pathways of diseases, with even greater efficacy and safety compared to conventional treatment strategies (2). As these molecules are exogenous to the immune system, drug-associated immunogenicity can develop, with a significant impact on the efficacy and safety of the treatment as well as on compliance and the individualization of these therapies in certain patients. The immune response generated against monoclonal antibody therapies can result in low circulating drug levels, loss of therapeutic efficacy, poor drug survival, and/or associated adverse events, such as infusion reactions. Several factors can influence the clinical impact of this immunogenicity, and their identification can be useful for the optimization and personalization of biological therapies. Concomitant immunosuppressive therapy can significantly reduce the frequency of detection of anti-drug antibodies (ADAs) or delay their appearance (3).
In this regard, it has been shown that the concomitant administration of methotrexate (MTX) or azathioprine (AZA) reduces immunogenicity in a dose-dependent manner, mainly with the use of tumor necrosis factor (TNF) inhibitors (4).

WHAT ARE MONOCLONAL ANTIBODIES?

Monoclonal antibodies (mcABs) are proteins produced in vitro using recombinant techniques from a single clone of B lymphocytes. They were first recognized in sera from patients with multiple myeloma, in whom the clonal expansion of malignant plasma cells generated high levels of a specific antibody subtype. The fusion of a murine B cell with an immortal myeloma cell generates a hybridoma that produces these antibodies. Murine mcABs have been genetically engineered to produce molecules with a higher proportion of human protein. Currently, chimeric (65% human), humanized (>90% human), and fully human (100% human) mcABs are available. The higher the percentage of murine proteins, the greater the ability of the mcAB to induce an anti-mouse humoral immune response (HAMA, human anti-mouse antibodies) (1,5). mcABs are intended to mimic or inhibit the action of natural proteins, suppressing only a specific part of the immune system. They block interactions between the target molecules and their ligands, for example by acting on specific mediators of inflammation or by triggering the lysis of coated tumor cells. Many mcABs have been developed using recombinant DNA technology, and several are available on the market with a safety profile considered even more favorable than that of traditional immunosuppressive agents (5). Immunogenicity against biological drugs is manifested by the generation of ADAs, which can alter their pharmacokinetic and pharmacodynamic properties, reducing the efficacy of the drug. In more severe cases, ADAs can neutralize the therapeutic effects of the drug or even cause serious adverse effects. Although some factors that contribute to the formation of ADAs are known, the molecular mechanisms by which therapeutic mcABs elicit ADAs have not been completely clarified. Humanized mcABs unexpectedly show similar immunogenicity to chimeric antibodies and, based on their greater sequence homology, chimeric mcABs are sometimes more "human" than humanized mcABs, demonstrating the participation of factors other than the presence of murine genetic sequences in the development of this immunogenicity (6).

IMMUNOGENICITY TO BIOLOGICALS

Virtually all therapeutic proteins, known as biological drugs, elicit an immune response with the consequent production of ADAs. This phenomenon is the result of a specific adaptive immune response that involves the participation of T and B lymphocytes. Most of these antibodies are directed against the antigen-binding site of the therapeutic mcAB and therefore neutralize it. This ADA response explains why fully human antibodies can still be highly immunogenic (7). Two fundamental principles explain the theoretical basis for the immunogenicity of biologic agents: biopharmaceuticals are exogenous in nature (neo-antigens or non-self antigens) and may have little or no similarity with endogenous molecules, preventing the development of immune tolerance, so that the recipient's immune system recognizes biological drugs as foreign molecules (8). It is now known that ADAs are predominantly directed against the hypervariable regions of immunoglobulins, known as complementarity-determining regions (CDRs), which form the antigen-binding site of the therapeutic antibody.
In this way, they elicit genuine neutralizing "anti-idiotypic" responses by competing with the drug's target molecule (e.g., TNF) for the drug's binding site. Neutralizing ADAs directly affect the mechanism of action of the drug by preventing the mcAB from binding to its target (9). Non-neutralizing ADAs, which bind to other parts of the drug, can also form immune complexes that can alter the clearance of the biological drug and/or reduce its bioavailability, lowering free drug concentrations. The presence of ADAs may be associated with two main clinical consequences: a reduction in therapeutic efficacy and/or an increased risk of adverse events (AEs), mainly hypersensitivity reactions (9). The reason why ADAs develop in different inflammatory diseases has not yet been elucidated; it could be related to the pathogenic mechanism of the disease itself or to different degrees of cell activation (10,11). Several studies have revealed the significant impact of immunogenicity on the response to biological drug treatment. The quantification of ADAs is a challenge, since different laboratory tests are used to detect them, which also makes quantitative data difficult to compare among clinical studies. The prevalence of ADAs varies widely depending on the type of test used, as well as on the frequency of false-negative results and the non-specific binding to the drug that can occur in some assays, making it difficult to accurately assess their clinical impact (12). Many factors can influence the immunogenicity findings, including sample handling, time of collection, concomitant medications, and underlying diseases (Table 1). In addition, methodological differences can substantially affect the results, without a complete understanding of the conditions that produce certain ADA titers. Thus, regardless of the incidence of ADAs, the actual antibody titers and their effects on pharmacokinetics, efficacy, and safety are the most relevant points to consider (13). Depending on the circulating ADA titers, a reduction in the drug concentration can be clinically significant. In patients with low ADA titers, drug concentrations may remain high enough to be effective, while in patients who develop high ADA titers, a substantial portion of the drug will be neutralized and is likely to produce clinical non-response over time (21). The presence of ADAs could reduce therapeutic responses by up to 80%, particularly in patients who do not receive concomitant MTX (22). MTX has been shown to be efficient in reducing immunogenicity in a dose-dependent manner, either by reducing the frequency of detectable ADAs or by delaying their appearance, thereby increasing the effectiveness of biological therapies (4). Although most ADAs do not cross-react with other biological agents with different CDR regions, patients generating ADAs to one biological drug have a greater probability of developing ADAs to a new biological drug. In patients who do not respond to biological agents and who have developed ADAs, it is recommended to switch to a less immunogenic drug, regardless of the mechanism of action (23). The most common AEs associated with the presence of ADAs are hypersensitivity reactions, which can range from mild to severe.
Although immunoglobulin (Ig) E-type ADAs have been reported, the vast majority of ADAs belong to the IgG class, suggesting an alternative, IgE-independent pathway of anaphylatoxin production that may or may not nonspecifically activate mast cells. These ADA-independent cytokine release syndromes can be managed in the short term by stopping the infusion of the biological agent, decreasing the infusion rate, or administering histamine blockers and corticosteroids (24).

BIOLOGICALS IN PSORIASIS AND PSORIATIC ARTHRITIS

Psoriasis (PsO) is a chronic, immune-mediated, inflammatory skin and systemic disease that affects approximately 2-3% of the world's population (25). The different phenotypes of this entity are the result of genetic and epigenetic changes, ultimately determining an altered immune function and a dysregulated systemic inflammatory response (26). The chronic nature of the disease requires prolonged systemic therapy to maintain optimal clinical responses. In the last two decades, biological therapies have revolutionized the management of PsO and psoriatic arthritis (PsA), thanks to advances in the understanding of their pathogenesis. The initial PsO trigger is believed to involve the activation of antigen-presenting dermal dendritic cells and the production of interferons (IFN-α and IFN-β), the antimicrobial peptide LL-37 (cathelicidin), and TNF-α by damaged keratinocytes. This is followed by the generation, maturation, and recruitment of various inflammatory cells, orchestrated by effector T helper lymphocytes (Th17 and Th22) through mediating cytokines, chemokines, and interleukins (ILs), mainly represented by the TNF-α/IL-23/IL-17 axis (27,28). Among the new therapeutic strategies for the management of psoriatic disease (PD), biological agents have gained increasing importance in recent years, by interrupting key inflammation pathways in patients with PsO and PsA (29,30).

Table 1 - Factors influencing the immunogenicity of biological drugs

Patient-related:
- Genetic factors: IL-10 gene polymorphism (fundamental in antibody synthesis) in patients with rheumatoid arthritis (RA) treated with anti-tumor necrosis factor (anti-TNF) agents. Specific human leukocyte antigen (HLA) haplotypes: HLA-DRB1 in antigen-presenting dendritic cells in patients with hidradenitis suppurativa (HS) and adalimumab (ADL), and in inflammatory bowel disease (IBD) and infliximab (IFX); HLA DBbeta-11, HLA-DQ-03, and HLA DQ-05 anti-TNF alleles. V158F functional polymorphism in one of the FcγR genes, which affects the antibody's binding capacity to the drug (e.g., IFX in Crohn's disease (CD)) (1).
- Disease type and activity: Immune system (IS) activation, due to the immunoreactivity of the disease itself or to high expression of costimulatory molecules in dendritic cells that accelerates the production of anti-drug antibodies (ADAs); B lymphomas in patients with RA treated with rituximab (RTX): ADAs in 1-4%. Type of inflammatory disease: primary Sjögren syndrome plus ANCA(+) vasculitis, ADAs in 25% of patients; SLE, ADAs in up to 40%; anti-IFX ADAs in RA(+) versus RA(-) patients, 62.5% versus 37.5%, respectively. Reduced disease activity allows higher levels of circulating monoclonal antibodies (mcABs), which may promote immune tolerance.

Drug-related:
- Drug dose (and plasma concentration): lower doses are more immunogenic than higher doses; higher doses of the drug reduce immunogenicity and induce tolerance by depletion of the immune response (14).
- Route of administration: intradermal or subcutaneous administration is more immunogenic than the intravenous route, as these routes favor uptake and presentation by antigen-presenting cells (APCs).
- Frequency of administration: intermittent treatment is more immunogenic than continuous therapy, as continuous administration allows the development of immune tolerance.
- Chemical formulation: the molecular structure is not identical to endogenous immunoglobulin (Ig), even in fully human mcABs (new epitopes in complementarity-determining region (CDR) sequences), owing to idiotype/anti-idiotype interactions (1). Severe anaphylactic reactions to cetuximab are due to non-human carbohydrate residues that cross-react with red meat proteins (beef, pork, or lamb) in sensitized patients, inducing the formation of IgE-type ADAs (15).
- Post-translational modifications: removal of the N-terminal glycosylation of the Fc fragment chains decreases the immunogenicity of the mcAB. Impurities in the formulation processes; danger model: the IS responds more strongly to substances that cause harm than to merely exogenous ones, for example impurities or residues from the processing of biological agents (1).
- Target molecules: the anti-IL-6 mcAB tocilizumab has a low incidence of ADA formation because interleukin (IL)-6 participates in modulating the humoral immune response. mcABs directed at certain target molecules on cell surfaces would induce greater immunogenicity than those directed against soluble molecules, since the latter require more processing before finally being presented as antigens. Rituximab, a chimeric anti-CD20 mcAB, selectively depletes CD20(+) B lymphocytes but affects neither pre-B nor immature B lymphocytes, nor the maturation of memory plasma cells, which prevents the production of ADAs (16).

Treatment-related:
- Treatment duration: short treatments are more immunogenic than long ones; ADA titers decrease over time in prolonged treatments, as continuous exposure to the drug induces immunological tolerance.
- Treatment interruption: variable response; greater development of ADAs after temporary suspension of IFX versus continuous administration without interruptions (39% versus 16%) in patients with CD and ulcerative colitis (UC) (17).

CLINICAL EVIDENCE

With the advent of biologic drugs, the treatment of PD has changed dramatically owing to their high efficacy and tolerable safety. Currently, a variety of biological agents are available for the long-term treatment of PsO and PsA; therefore, it is essential to understand the potential development of clinically relevant ADAs in the course of therapy (3). Although there are clear differences among the various therapeutic biologic products in terms of reported rates of ADAs, there is no guideline or consensus on an approach for managing them. Making valid comparisons of immunogenicity between different drugs is problematic, since different types of laboratory tests are used for the analysis of these ADAs.
Furthermore, the patient population included, as well as the molecular structure of the biological drugs themselves, strongly influences the prevalence of reported ADAs and the impact they have on the clinical response.

Anti-TNF

Anti-TNF-α agents have demonstrated efficacy both in monotherapy and in combination with disease-modifying antirheumatic drugs (DMARDs) in the treatment of chronic immune-mediated inflammatory diseases, such as rheumatoid arthritis (RA), Crohn's disease (CD), PsO, and PsA (32). However, the immunogenicity of these drugs plays a significant role in the variability of clinical responses among patients with these diseases. The clinical impact of immunogenicity on the outcome of anti-TNF-α treatment in PsO and PsA patients has not yet been completely clarified. Despite the high efficacy rates reported with these agents in PsO, a substantial proportion of patients still experience primary or secondary failure or develop significant side effects, potentially attributable to immunogenicity (33,34). Infliximab (IFX) was the first anti-TNF-α approved by international regulatory agencies for use in patients with PD. It is a chimeric IgG1 mcAB that is administered intravenously. Adalimumab (ADL) and golimumab (GOL) are fully human mcABs, produced by recombinant DNA techniques and administered subcutaneously. Etanercept (ETN) is a fusion protein consisting of two extracellular receptor domains (TNFR2) and an Fc fragment of human IgG1. Certolizumab (CTL), on the other hand, is a humanized Fab fragment conjugated with polyethylene glycol (35). This peculiarity prevents it from binding to the neonatal Fc receptor, blocking its passage through the placental barrier or into breast milk and making its use safe in pregnant women (36). The immunogenicity of these agents seems to be related to the specific molecular structure of each anti-TNF-α agent and to how it acts as a distinct immune stimulus. ADAs affect the pharmacokinetics (PK) of anti-TNFs by binding to specific idiotypes of the drug, neutralizing their activity, and accelerating the clearance of the antibody-drug complexes by the reticuloendothelial system. Moreover, inflammatory mechanisms not mediated by TNF may also be responsible for the lack or loss of response to anti-TNF agents, as may other factors that significantly affect their PK, such as body surface area, serum albumin concentration, degree of inflammation (TNF levels), and disease severity. The concomitant administration of antimetabolites, such as AZA or MTX, may increase the concentrations of anti-TNFs by reducing the formation of antibodies or the clearance of immune complexes (37). The clinical consequences of the development of ADAs are heterogeneous and include severe allergic/anaphylactic reactions and a reduction or loss of therapeutic efficacy (38). Recently, Pecoraro et al. carried out a systematic review and meta-analysis that included 34 studies, enrolling 4273 patients affected by an autoimmune inflammatory disease under treatment with anti-TNF-α agents. In this group, the development of ADAs was evidenced in up to 18.6% of cases, with a marked reduction in clinical response (response rate (RR) 0.43, 95% confidence interval (CI) 0.3-0.63), especially in patients treated with IFX (RR 0.37) or ADL (RR 0.40) (39). A retrospective cohort study from the ABIRISK project recruited a total of 366 patients with RA treated with ADL (n=240) or IFX (n=126).
Of these, 92.4% were anti-TNF naïve (n=328/355) and 96.6% were treated with MTX (n=341/353). After a follow-up period of 18 months, ADAs were detected in 19.2% of patients treated with ADL and in 29.4% of patients in the IFX group. The cumulative incidence of ADAs increased over time, reaching 50% and 66.7% for the ADL and IFX groups, respectively, at the end of the study period. The factors associated with a higher risk of developing ADAs were a longer duration of disease, RA of moderate activity, and a prolonged smoking habit (40). PsO and PsA are other examples in which anti-TNFs, despite their high response rates, fail to demonstrate efficacy (primary failure) or induce significant side effects in a substantial proportion of patients. In placebo-controlled clinical trials, 40-60% of patients with active PsA treated with ADL or IFX, and 30-40% of those who received ETN, failed to meet the American College of Rheumatology (ACR) criteria for a clinical response improvement of at least 20% (ACR20) (33,41). Similarly, between 20% and 50% of patients with plaque PsO do not achieve a clinical improvement of at least 75% from baseline, evaluated using the Psoriasis Area and Severity Index (PASI). It has also been shown that only 75-85% of patients with PsO manage to maintain in the long term the PASI75 response achieved with anti-TNF agents during the first period of treatment (34,42). Regarding the immunogenicity of each drug in this particular group, the following findings are worth mentioning. Infliximab (IFX): The development of anti-IFX antibodies has been documented in patients with PsO (43). The factors related to a greater development of ADAs were a dose of 3 mg/kg versus 5 mg/kg, intermittent or as-needed drug administration regimens versus scheduled ones, and the absence of MTX as concomitant therapy. On the other hand, the presence of antibodies against IFX was associated with infusion-related adverse reactions only in the re-treated group of patients and after an interval of 20 weeks from the last administration (23% in patients positive for ADAs, compared to 8% in patients without ADAs). Furthermore, patients with antibodies were less likely to maintain a response at week 50 of follow-up (43-45). Similarly, in patients with active PsA treated with IFX at doses of 5 mg/kg, ADA production was observed in up to 15.4% of cases after 54 weeks on the drug. The development of anti-IFX antibodies was more frequent in patients who did not receive associated treatment with MTX at the beginning of the study (26.1% versus 3.6% in patients who did receive it), showing an inverse correlation with the clinical response. The median percentage improvement in ACR20 for ADA-positive patients was lower (21.7%) than in those who did not develop antibodies (33.3%) at the end of the study (46). Adalimumab (ADL): Reports regarding ADL are more limited, although several studies mention an incidence of ADAs between 6% and 45%, depending on the detection technique; these antibodies would not be neutralizing. The presence of anti-ADL antibodies was linked to a decrease in the efficacy of the drug in achieving PASI75 (23.1% versus 72.7% in ADA-negative patients), with rapid loss of response (PASI <50) at week 52 of follow-up (47). Vogelzang et al. observed that ADL concentrations were significantly lower at 28 and 52 weeks of follow-up in 103 patients with PsA and positive ADAs, correlating likewise with a lower clinical response. ADL concentrations reflect the amount of drug available in the serum to bind to its target molecule.
If no free drug is available, or its concentration is insufficient, inflammation cannot be effectively suppressed. Therefore, measuring drug concentrations in patients who do not respond adequately could provide more information on why the response is inadequate (48). Etanercept (ETN): Etanercept is believed to be less immunogenic than other anti-TNF agents (42). In patients with PsO, the frequency of detection of anti-ETN antibodies varies between 1.5% and 2.8%, although in open-label extension studies of up to 96 weeks of follow-up, they were evidenced in up to 18.3% of patients. Consistent with the results of previous clinical trials, these ADAs were shown to be non-neutralizing and to have no apparent effect on the efficacy of the drug or its safety profile (49). Golimumab (GOL): Studies with GOL in the treatment of patients with RA, PsA, and ankylosing spondylitis report low ADA titers, without impact on clinical efficacy or adverse reactions at the injection site (50). Certolizumab (CTL): CTL pegol is a useful and safe option for the treatment of moderate-to-severe plaque PsO, and it provides an important treatment option for women of childbearing age, for whom the available options are limited (51,52). The reported incidence of ADAs for CTL varies in the different studies between 5% and 37%, depending in large part on the method used for their identification, with mixed results regarding the clinical response (no effect in patients with RA, but reduced effectiveness in CD) (53). Although the proportion of patients with detectable anti-CTL antibodies may be high, the drug concentration remains above the therapeutic range (>20 mg/L), which correlates with the ability to neutralize TNF (54). In phase III studies carried out in patients with plaque PsO treated with 200 mg or 400 mg of CTL, the presence of ADAs was demonstrated in 19.2% and 8.3% of patients, respectively, on one or more occasions by week 48 of follow-up. However, the presence of these antibodies did not appear to be associated with an increase in AEs (52,55,56). Anti-IL-12 and 23: IL-12 and IL-23 participate in PsO pathogenesis by facilitating the inflammatory Th1 response. IL-12 is a heterodimeric cytokine composed of two subunits, p35 and p40. The latter subunit is also part of IL-23, making it a common component of both interleukins (57). Ustekinumab (UTK) and briakinumab (BAK), two mcABs directed against the p40 subunit of IL-12/23, were developed and evaluated as therapeutic alternatives for PsO and other immune-mediated diseases. UTK is the only IL-12/23p40 inhibitor approved by the Food and Drug Administration (FDA) for the treatment of moderate-to-severe plaque PsO and PsA. The clinical development of BAK was discontinued due to safety concerns reported in clinical trials, including cardiac events and malignancies (58). Ustekinumab (UTK): UTK is a high-affinity, fully human IgG1/κ mcAB directed against the p40 subunit of IL-12/IL-23. It mainly inhibits Th17 lymphocyte signaling pathways and has been approved by the FDA for the treatment of moderate-to-severe PsO since September 2009 and for PsA since September 2013 (59). Recently, Hanauer et al., in a long-term (5-year) follow-up study in patients with CD treated with subcutaneous UTK, demonstrated a low incidence of ADA formation (4.6%) at week 156, with maintenance of the clinical response and good tolerance (60). On the other hand, Leonardi et al.,
in PHOENIX 1, a phase III, randomized, double-blind, placebo-controlled study of UTK that recruited 766 patients with moderate-to-severe PsO, showed that 38 of the 746 patients who completed the protocol and remained on the drug (5.1%) developed ADAs at low titers (<1:320) at week 76, which were not related to adverse reactions at the injection site (61,62). Anti-IL-17: IL-17 plays a fundamental role in the pathogenesis of PsO and PsA and belongs to a family of cytokines that includes six members (IL-17A, IL-17B, IL-17C, IL-17D, IL-17E, and IL-17F). IL-17A is considered the most important since, by interacting with its receptor (IL-17R), it produces chemoattraction of neutrophils, recruitment of T helper 17 lymphocytes, and stimulation of macrophages, endothelial cells, and fibroblasts, perpetuating the inflammatory response (63). To date, three antagonists of the IL-17 pathway have been approved by the FDA for the treatment of PsO and PsA: secukinumab (SCK), ixekizumab (IXK), and brodalumab (BDL). Their approval was supported by phase III clinical studies demonstrating high efficacy, tolerability, and safety (29). Secukinumab (SCK): SCK is a fully human anti-IL-17A mcAB that has demonstrated efficacy in the treatment of moderate-to-severe plaque PsO. In seven double-blinded, randomized (DBR) phase III studies, statistically significant superiority of SCK over placebo was demonstrated from week 12 of treatment in clinical responses measured by PASI75/90/100, Investigator Global Assessment (IGA) 0/1, ACR20/50, and quality-of-life indices such as the Dermatology Life Quality Index (DLQI). It is an effective and safe drug, with rapid and long-lasting clinical responses across the spectrum of PsO manifestations. Although the incidence rate of AEs is low and comparable with that of other biological agents, a higher incidence of mucocutaneous infections by yeasts of the Candida genus stands out. This is probably explained by the fact that IL-17A plays a key role in mucocutaneous microbial surveillance, stimulation of granulopoiesis, and neutrophil trafficking. SCK has shown low immunogenicity in vitro and in clinical trials. In phase III clinical studies, only 0.4% of patients (10/2842) developed ADAs, the majority non-neutralizing, without evidence of modification of the pharmacokinetics, safety, or efficacy of the drug, although the small number of patients limited the power of the study (64). Recently, Reich et al. evaluated the immunogenicity of SCK over a 5-year follow-up period. Of a total of 1821 patients, 1636 were analyzed for treatment-emergent ADAs, of whom only 32 developed anti-SCK antibodies, corresponding to an incidence of less than 1% of new ADAs per year. Neutralizing antibodies were detected in 9 of the 32 patients, and half of these were transient. As an important conclusion, the researchers emphasized that no titer or type of antibody affected the efficacy, safety, or pharmacokinetics of SCK (65). Ixekizumab (IXK): IXK is a humanized IgG4/κ mcAB with high selectivity against IL-17A, approved since 2016 by the FDA and the European Medicines Agency (EMA) for the treatment of moderate-to-severe plaque PsO; in 2017, the FDA also approved its indication in PsA (29). The therapeutic efficacy of IXK was demonstrated in DBR clinical trials, offering rapid and sustained disease control and achieving PASI75 and PASI90 response rates in approximately 90% and 70% of patients, respectively, at 12 weeks of treatment.
In the long term, approximately 80.5% of patients maintained PASI75 after 3 years of follow-up. In head-to-head studies against ETN, UTK, and GSK, the superiority of IXK in achieving PASI100 was also demonstrated, in up to 40% of cases at week 12. Similar response rates were observed in patients with initial scalp, nail, or palmoplantar involvement, with a good safety profile even with prolonged use (66). Regarding anti-IXK antibodies, Blauvelt et al. in 2016 evaluated, in a blinded, randomized, and controlled setting, the presence of ADAs in patients treated with IXK during the induction (weeks 0-12) and maintenance (weeks 12-60) periods. Treatment-induced serum ADA levels were divided into subgroups according to antibody titers (negative, low, moderate, and high). At 12 weeks, the vast majority of patients were negative for ADAs: 91.0% of those who received IXK every 2 weeks and 86.6% of those on IXK every 4 weeks. Among patients who developed anti-IXK antibodies at 12 weeks, low titers were found in 5.7% and 8.0% of cases, moderate titers in 1.6% and 3.0%, and high titers in 1.7% and 2.4%, depending on whether they received the drug every 2 or 4 weeks, respectively. When evaluating clinical efficacy during the induction period, only patients with high ADA titers had reduced PASI75 responses compared to ADA-negative patients, with an average drop in clinical response of 53.5% for patients on IXK every 4 weeks, compared to 36.8% for those on IXK every 2 weeks. Importantly, at the end of the 60 weeks of follow-up, the clinical efficacy of IXK was similar among all ADA subgroups, regardless of the administration interval, and ADAs were not associated with other AEs (67). Recently, these results were corroborated by Reich et al. in a study with similar characteristics (68). Brodalumab (BDL): Anti-BDL antibodies were reported in 4% of cases, and there was no development of neutralizing ADAs. Among ADA-positive patients, 60% achieved a score of 0 or 1 on the Static Physician's Global Assessment (sPGA) scale at week 12 in the group that received BDL 210 mg every 2 weeks, compared to 79.1% of patients who did not develop ADAs. All patients who experienced disease relapse, defined as sPGA >3, were treated again with BDL 210 mg every 2 weeks (none of them ADA-positive), achieving an improvement of at least 75% from their baseline PASI. The authors emphasize that it is difficult to draw definitive conclusions regarding the effect of ADAs on the clinical response rate; nevertheless, given the small number of patients with positive anti-BDL antibodies, the presence of these antibodies does not seem to be associated with loss of response to the drug, as shown by the high percentage of ADA-positive patients who maintained efficacy at 52 weeks (70). Anti-IL-23: IL-23 is secreted by tissue-resident dendritic cells and macrophages and is a key cytokine involved in the protective immune response against fungal and bacterial infections. However, its dysregulated production, observed in PsO, activates the inflammatory cascade early, maintains the Th17 lymphocyte phenotype, and is critical for the production of pro-inflammatory cytokines such as IL-17A, IL-17F, and TNF. To date, four mcABs have been developed that selectively and highly specifically block the action of IL-23 (71-73). Guselkumab (GSK): GSK is a fully human IgG1/λ mcAB directed against the p19 subunit of IL-23.
It has been approved in Japan since 2016 for the treatment of PsO vulgaris, PsA, pustular PsO, and erythrodermic psoriasis, and by the FDA for the treatment of moderate-to-severe plaque PsO since July 2017 (59). Two phase III studies comparing GSK with ADL in the treatment of moderate-to-severe PsO demonstrated the superiority of GSK in achieving improvements in PASI90 and IGA 0/1 at week 28 of follow-up, with persistence of the response under sustained therapy versus withdrawal of the drug from weeks 28-48. Regarding anti-GSK antibodies, the VOYAGE 1 study reported the presence of ADAs in 5.3% of patients (26/492) at week 44, generally at low titers (81% with titers <1:320), which were not associated with a reduction in the clinical efficacy of the drug or with reactions at the injection site (74). Similarly, in the VOYAGE 2 study, anti-GSK antibodies were detected in 57 of 869 patients (6.6%) at week 48, generally at low titers (88% with titers <1:160), which likewise did not affect the clinical response to treatment or the incidence of injection-site reactions (75). In a recently published letter to the editor, Zhu et al. presented the results from VOYAGE 1 and 2 in a 100-week follow-up of the same patients. Of the 1713 patients exposed to GSK, 8.5% developed ADAs transiently, with 76% of cases having low (<1:160), 11.6% medium (1:320), and 12.3% high titers (>1:640); neutralizing antibodies were found in only 9 of 146 patients (6.2% of cases). Regardless of the nature of the anti-GSK antibodies, no loss of clinical response could be demonstrated; therefore, no drug dose adjustments were necessary (76). Tildrakizumab (TDK): TDK is a humanized IgG1/κ mcAB with high affinity against the p19 subunit of IL-23, which can be administered intravenously or subcutaneously (59). At doses of 100 and 200 mg administered at weeks 0 and 4 and then every 12 weeks, it has demonstrated efficacy and safety in the treatment of moderate-to-severe chronic plaque PsO. Recently, TDK has been approved for the treatment of chronic plaque PsO by the FDA and EMA (77). In a prospective study with pooled data from phase III clinical trials (P05495, reSURFACE 1, and reSURFACE 2) in patients with chronic plaque PsO treated with TDK, Kimball et al. evaluated the development of treatment-emergent ADAs and neutralizing antibodies, as well as the effects these could have on the pharmacokinetics, efficacy, and safety of the drug. In this integrated analysis, emergent ADAs were observed in approximately 4% of the 1,400 evaluable patients who received TDK for 12-16 weeks and in approximately 7% of the 780 patients who used the drug continuously for 52-64 weeks. Similarly, the incidence of neutralizing antibodies was 2-5% with 100 mg and 2-3% with 200 mg of the drug for the same periods analyzed. This subgroup experienced a moderate decrease in TDK exposure, with a reduction in clinical response reflected by a significant 10-15% drop in the mean PASI response relative to patients without ADAs at 52 weeks. The development of ADAs was not associated with an increase in severe AEs or with discontinuation of treatment. Overall, the incidence of potential immunogenicity-related AEs did not show a clear trend in patients with inconclusive titers or in any category of ADA-positive patients compared to patients without ADAs, similar to other results with anti-IL-23/IL-17 biologics (78).
Risankizumab (RSK): This biological agent is a humanized IgG1 mcAB that selectively binds to the p19 subunit of heterodimeric IL-23. In 2019, RSK received approval in Japan for the treatment of adults with PsO vulgaris, PsA, generalized pustular PsO, and erythrodermic PsO, and in Canada, the United States, and Europe for patients with moderate-to-severe PsO. Among patients treated with RSK 150 mg for up to 52 weeks (n=1079), ADAs and neutralizing antibodies were detected in 24% (263) and 14% (150) of cases, respectively. In most cases, ADAs were not associated with changes in the clinical response or safety. High ADA titers, found in approximately 1% of RSK-treated patients, were associated with a slight reduction in the clinical response. The incidence of injection-site reactions was 3% in patients with ADAs versus 1% in those without at week 16, and 5% versus 3% from week 52 onwards (79).

CONCLUSIONS

Currently, a variety of biological agents are available for the long-term treatment of PsO and PsA. These drugs, mcABs or fusion proteins, have developed rapidly in recent decades and have revolutionized the treatment of inflammatory pathologies with high systemic repercussions. Thanks to a more complete understanding of the pathophysiology underlying these diseases, they interfere with key pathways in the inflammation cascade with high efficacy and safety. Being exogenous molecules, they are capable of activating the immune system and triggering a specific adaptive immune response, producing specific neutralizing antibodies directed against the antigen-binding site of the therapeutic mcABs. Factors derived from the drug itself, as well as from the patient and even the treatment regimen, influence the development of this immunogenicity. It is currently considered that the presence of ADAs could reduce therapeutic responses by up to 80%, particularly in patients who do not receive concomitant immunosuppressive drugs. Of the biologics indicated for the treatment of PD, anti-TNF drugs, particularly IFX, have the highest rates of ADAs, associated with loss of clinical effectiveness and a higher incidence of adverse infusion reactions, especially with low-dose treatment, intermittent administration, or absence of concomitant MTX, among other factors. Reports regarding ADAs in new biologics are still scarce, but the most recent evidence suggests little impact on the clinical response to the drug, even with prolonged treatment. It is therefore essential to standardize the laboratory tests that determine the presence and titers of ADAs, in order to establish administration and management guidelines and determine the real clinical impact of these drugs.

AUTHOR CONTRIBUTIONS

Valenzuela F was responsible for the research design and conception. Flores R was responsible for manuscript writing. Valenzuela F and Flores R were responsible for the critical revision of the manuscript for important intellectual content.
IMPROVED COST MANAGEMENT AT SMALL AND MEDIUM SIZED ROAD TRANSPORT COMPANIES: CASE HUNGARY

Small and medium sized road freight transport companies located in Hungary are facing strong competition on the logistics market. An advanced cost management system supporting decisions on capacity allocation or pricing may be a competitive advantage for them, and indirectly for the whole economy as well. Still, they generally apply simple, traditional cost calculation regimes, which may be sufficient in the case of a homogeneous service portfolio. Nevertheless, road haulage companies with heterogeneous service structures may witness information distortions when using traditional costing methods, so it is recommended that they introduce better costing principles. To support improved transport costing, a multi-level full cost allocation model has been set up and tested in this paper. The research results have pointed out that such a methodological development, accompanied by the extension of the data collection mechanism, can contribute to making the cost management systems of road freight transport companies more effective.

INTRODUCTION

According to previous surveys, road freight transport companies running their business in Hungary generally apply simple ex post costing methods [1]. This might also be true for road freight transport enterprises located in other Central-Eastern-European (CEE) countries, since they have a political and economic background similar to that of the Hungarian ones. The cost management methods used are mainly based on the average aggregate costs of transport services. These are regarded as traditional costing methods. Sometimes performance-independent fixed and performance-dependent variable cost components are separated, and thus the calculation becomes slightly differentiated. Nevertheless, the main costing principle, i.e. averaging aggregate costs, remains the same. As the outputs of the costing regimes, i.e. service cost data, constitute the basic input of decision-making and ex ante pricing procedures, it is highly important to make cost calculations more accurate. Transport costs are often regarded as one of the main factors influencing a firm's competitiveness. More effective transport services, i.e. cost reductions in haulage, may increase the export share of companies involved in international trade [2]. So more efficient transport operations, controlled by adequate decision support systems, are essential not only for the transport companies themselves but also for the firms utilizing their services.
According to public and comparable EUROSTAT data, less than 7% of the Hungarian road freight transport companies employed at least 10 employees in 2009. This value was almost the same in Slovenia and only about 3% in Poland. The ratio of enterprises having 50 or more employees was less than 1% in each country. These data seem to indicate that the majority of road haulage companies in Hungary, and possibly in other CEE countries as well, belong to the category of small and medium sized enterprises (SME). This phenomenon is confirmed by [3], who highlight the coexistence of two very different production models in this industry: 'the model of large enterprises combining haulage, freight forwarding and logistics, more likely in north-western Europe, and an SME model, principally in road freight (…) particularly in southern and eastern Europe.' The authors attribute the latter to low production costs derived from wages and social welfare charges below the EU average. This might also be the explanation for the flagging out mentioned by [4] and in the report of the respective high-level group [5]. Simple costing regimes may be sufficient for decision support and pricing purposes in the case of micro-scale road transport companies, which generally provide homogeneous services. Some small and even medium scale road transport companies may also have homogeneous operational and service structures, which allow them to employ traditional cost management techniques.

The remaining group of small and medium scale road transport companies disposes of more complex operational structures, using multiple resources and managing inhomogeneous service systems. More complex organisations generally require more developed management accounting systems [6]. These companies, however, do not make use of advanced, sophisticated costing schemes, although the application of such calculation methods could deliver advantages to them: understanding the drivers of costs, differentiating costing and pricing regimes by taking into account the differences between various services and service generators, determining the real costs of elementary services, and using this information for pricing activities. Thus, the aim of this paper is to elaborate and test an improved cost calculation method applicable to road freight transport companies requiring more accurate costing information. The improved costing model:
- shall be more detailed and sophisticated and go beyond the use of simple aggregate values;
- provides benefits by differentiating the direct and indirect cost components, as well as the fixed and variable cost components;
- shall be governed by the cause-effect based allocation of indirect costs;
- shall use differentiated performance indicators;
- has to take into account the operational characteristics, i.e.
the variety of resources or resource types of the examined company;
- has to consider the special features of different transport services or service types.

The methodology is derived from the relevant existing costing principles applied, or at least piloted, at transport or logistics companies. Nevertheless, these principles shall be considerably improved to meet the requirements set above. The costing methodology is then adapted to a typical road transport company having a generalised operational and service architecture. Having developed the calculation equations, a pilot application with real input data is carried out. The pilot calculation and the comparison of the possible calculation schemes make it possible to demonstrate the advantages of the improved cost calculation model.

LITERATURE REVIEW

Transport companies, particularly small and medium sized ones, are seldom the object of cost calculation case studies reported in the literature. Although several research results on the theoretical issues of costing principles are available, concrete and full-scale applications assessing and evaluating the cost structure of transport enterprises can hardly be found. The few existing case studies in this field use either activity-based costing (ABC) or multi-level full cost allocation (MFCA) for solving costing problems. Both of them are characterised by the allocation of indirect costs based on relative performance consumption. While the former method uses activities, the latter takes into account the organisational structures to support cost allocations.

The most detailed and tested transport-related ABC model has been elaborated for the case of a road haulage company [7]. A medium sized road haulage company operating 122 trucks has been analysed in the case study. Instead of single transport tasks, 28 transport service groups, categorised according to the target countries, have been identified as elementary profit objects. (Note that ABC applications may use the designation "cost object" for profit objects.) A two-stage indirect cost allocation has been carried out using differentiated resource drivers and cost drivers and applying 19 cost pools and 17 activities. It turned out that there may be significant differences between the results of traditional and activity-based cost calculations, so it is worth investing in the development of the costing regime. The costs of individual airplanes and flights have been analysed with ABC in an empirical study examining operating costs in the airline industry [8]. Four activity pools containing 10 activities have been used for allocating overheads. The overheads have been allocated to the airplanes first, and then the costs of the airplanes have been assigned to individual flights proportionally to their relative transport performance. An ABC system has been introduced in timber harvesting, including road transport [9]. Here, transport operations can be regarded as internal secondary processes, while one lot of timber from a specific assortment has been selected as the elementary profit object. Seven activities for forest transport and eight activities for long-distance transport have been identified in the ABC model. The cost drivers of transport-related activities were time or distance based.

In spite of the relatively limited transport-related research results, several ABC studies can be found in the field of logistics. Most of them deal with internal or external logistics processes of production or manufacturing systems, i.e.
distribution management [10], warehousing [11], storage systems [12], or entire material flows [13]. Even supply chains have been evaluated by ABC, owing to the standardised cost definitions and allocation procedures [14]. Logistics service providers offering, among others, transport services have at the same time been less analysed. A basic ABC model of such companies, using matrix algebra, has been elaborated through a case study defining sample activities and cost drivers [15]. Eight activity centres have been defined, and the cost of a sample complex logistics service offered to a certain customer has been calculated. In another example, the theoretical cost structure of third-party logistics service providers has been evaluated by ABC with special regard to the general activities of warehousing and transport [16]. Nine activities have been identified for warehousing and for transport. Most of the proposed transport-oriented cost drivers are volume based, and some of them are distance or time based.

A more recent costing approach is the so-called time-driven ABC [17]. This approach identifies the processes consisting of activities, their costs, and their effective capacities. The capacity of logistics processes can be expressed, for example, by the amount of total working or operation time. The cost per time unit is calculated by dividing the total cost by the capacity. Costs are then allocated to the profit object by multiplying the cost per time unit by the time needed to perform the corresponding activity. The time needed to perform an activity is estimated by detailed time equations containing so-called time drivers. Time equations describe the different characteristics of specific cases. As time drivers include various performance indicators, they also make multiple cost driving possible. Although a detailed time analysis is not foreseen in this study, the importance of time consumption is acknowledged in the modelling procedure. The outputs of the empirical ABC studies, especially the road haulage costing results, can be used as a starting point for our analysis. Nevertheless, the intention here is to analyse single freight transport tasks instead of service groups, and the multi-level structure of cost centres is to be taken into account. Another task is the differentiation between fixed and variable cost items. Thus, it shall be assessed whether MFCA applications in transport or logistics support these additional methodological requirements.
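To make the time-driven ABC calculation described above concrete, the following minimal sketch illustrates the two steps (capacity cost rate, then time-equation-based allocation) with purely hypothetical figures; the activity, rates, and time-driver coefficients are illustrative assumptions, not data taken from the cited studies.

```python
# Minimal time-driven ABC sketch (hypothetical figures, not from the cited studies).

# Step 1: cost per time unit = total cost of the resource pool / practical capacity.
total_cost = 120_000.0       # assumed yearly cost of an order-handling department (EUR)
capacity_minutes = 96_000.0  # assumed practical capacity in working minutes per year
rate = total_cost / capacity_minutes  # EUR per minute

# Step 2: a time equation estimates the minutes consumed by one transport order.
# Time drivers here: number of delivery stops and a flag for customs handling.
def order_handling_minutes(stops: int, customs: bool) -> float:
    base = 8.0           # minutes to register any order (assumed)
    per_stop = 3.5       # extra minutes per delivery stop (assumed)
    customs_extra = 15.0 if customs else 0.0
    return base + per_stop * stops + customs_extra

# Step 3: allocate cost to the profit object (a single transport order).
minutes = order_handling_minutes(stops=4, customs=True)
allocated_cost = rate * minutes
print(f"rate = {rate:.3f} EUR/min, time = {minutes} min, cost = {allocated_cost:.2f} EUR")
```

With these assumed inputs, the capacity cost rate is 1.25 EUR per minute and the time equation yields 37 minutes, so 46.25 EUR is allocated to the order; adding further time drivers extends the equation without changing the allocation principle.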
MFCA models have been set up and used for logistics processes integrated in production systems [18] and for road transport based logistics service providers [19]. The most relevant of these calculations is the one evaluating the costs, performances and profits of logistics service providers operating road freight transport services. The article gives guidance on the general methodology of MFCA and elaborates a generalised calculation model applicable by logistics service providers [19]. The model has been illustrated by a full-scale sample calculation where 13 cost centre types have been used for calculating the costs of 10 logistics service packages. Some of the input data, however, have just been estimated using empirical experience, as no real data were available for the pilot calculation. A more detailed description of MFCA algorithms can be found in the methodology paper analysing the costing techniques in complex transport systems [20]. A generalised costing model for rail freight transport operators has been developed with 14 cost centre types. The general equations have then been exploited for conducting a parametric sample cost calculation of elementary rail freight transport tasks. The results of the analysed MFCA applications are useful as they can help build up the multi-level costing model, identify the cost drivers and define the necessary equations. Nevertheless, small and medium sized road transport companies require specific, usually simpler models depicting their operational structures. Furthermore, none of the reported MFCA models has paid attention to fixed and variable costs, so this problem is to be tackled by the improved algorithms. Last but not least, real-world input data are required to test MFCA models in practice. Note that MFCA is a full cost allocation method. This means that, in the end, all costs are allocated to the profit objects. By doing so, decision makers are able to see the full profitability of their products or services; that is why they often prefer full cost allocation to other costing techniques. At the same time, MFCA and other full cost allocation methods require cost drivers for all cost elements and types. Usually, the problem setup cannot be solved in an exact way, so some simplifications need to be incorporated into the calculation models. Thus, the outputs of such models are generally not fully accurate, which is the main limitation of the chosen methodology.
METHODOLOGY After having reviewed the corresponding transport and logistics costing approaches, the MFCA model fulfilling most of the prescribed requirements seems to be a suitable business management tool which is worth implementing in road freight transport. As mentioned, the costing algorithms shall be revised, since fixed and variable cost items are to be differentiated in the calculation process. Fixed costs are not dependent on performances, so only variable costs can be included in the allocation procedure governed by the relative performance consumption. When using the MFCA principle, indirect costs are first recorded in the cost centres. These are the so-called primary costs of the cost centres. The primary cost of a cost centre can be determined on the basis of the resources assigned to it. The secondary cost of a cost centre consists of the allocated items coming from the serving cost centres, where appropriate. Cost allocations are carried out according to the relative performance consumption. So each cost centre with cost items to be allocated shall be provided with an indicator measuring its performance and the consumption of this performance. These indicators serve as cost drivers during the cost allocation. Cost centres can be organisational units, pieces of equipment, etc. representing resources consumed by multiple objects. They are arranged into a multi-level hierarchy according to the operational structure of the company. Cost centres can serve other cost centres or contribute to the production of elementary or end transport services. Cost centres are indexed as k = 1…n. When they play the role of service cost centres, the indexing is i = 1…n. Note that cost centres may also be interrelated in several ways. The model assumes that there is a one-way performance flow between the cost centres and no feedbacks are allowed. Of course this is a simplification and it may reduce the accuracy of the calculation. Nevertheless, this simplification makes it possible to avoid iterative approaches which would also reduce accuracy. Profit objects are the elementary transport services gaining revenues and bearing costs. Direct costs are assigned to the profit objects, while indirect costs are allocated using the cost centres and their hierarchy. The allocation is governed as before: cost items are allocated proportionally to the performance consumption. Profit objects are indexed as j = 1…m. The allocation of indirect costs goes from the highest level to the lower levels of the calculation hierarchy. The calculation is finished as soon as all indirect costs have been allocated to the profit objects. When introducing the differentiation between fixed and variable costs, the original calculation equations [18,19,20] shall be modified. The new equations have been elaborated by the authors on the basis of the original ones.
Here, the performance-independent fixed indirect cost items are not included in the multi-level indirect cost allocation. They are collected and aggregated separately and are assigned to the profit objects at the end of the calculation. So the cost of a cost centre can be divided into fixed and variable parts. Fixed costs in cost centres can be regarded as assigned primary costs, as fixed cost items are not allocated in the multi-level model. At the same time, variable costs can be divided into assigned primary and allocated secondary parts:

C_k = C_k^{f} + C_k^{vp} + C_k^{vs} \quad (1)

where: C_k^{f} - fixed (primary) cost of cost centre k; C_k^{vp} - variable primary cost of cost centre k; C_k^{vs} - variable secondary cost of cost centre k. The variable secondary cost is the sum of allocated variable cost items coming from the serving cost centres on the basis of relative performance consumption. So the cost of a cost centre can be calculated as follows:

C_k = C_k^{f} + C_k^{vp} + \sum_{i=1}^{n} \frac{C_i^{v}}{P_i}\, p_{ki} \quad (2)

where: C_i^{v} - variable cost of service cost centre i; P_i - performance of service cost centre i; p_{ki} - performance consumption of cost centre k at service cost centre i. The cost of a profit object can be divided into direct and indirect parts. The classification of direct and indirect costs depends on the applied accounting or data collection rules. Direct costs do not need to be split into fixed and variable parts, as no additional allocations are necessary to determine them. In any case, direct costs are generally variable costs in road transport practice. Indirect costs of profit objects, however, shall be further divided into fixed and variable parts, as the calculation of the two types of indirect costs differs:

C_j = C_j^{d} + C_j^{if} + C_j^{iv} \quad (3)

where: C_j^{d} - direct cost of profit object j; C_j^{if} - fixed indirect cost of profit object j; C_j^{iv} - variable indirect cost of profit object j. The variable indirect cost is the sum of allocated variable cost items coming from the serving cost centres on the basis of relative performance consumption. The fixed indirect costs of profit objects can be determined in different ways: 1) using the accounting-based approach, where the aggregated sum of fixed costs collected in cost centres is distributed among the profit objects proportionally to their direct costs; 2) applying the time-based approach, where the aggregated sum of fixed costs collected in cost centres is distributed among the profit objects on the basis of their relative service time. The latter solution may be regarded as more reasonable, since fixed indirect costs as overheads can be connected to time rather than to direct costs. So, with the time-based approach, the cost of a profit object can be calculated as follows:

C_j = C_j^{d} + \sum_{i=1}^{n} \frac{C_i^{v}}{P_i}\, p_{ji} + \frac{\sum_{k=1}^{n} C_k^{f}}{\sum_{j=1}^{m} T_j}\, T_j \quad (4)

where: p_{ji} - performance consumption of profit object j at service cost centre i; T_j - duration or transport service time of profit object j. When using the traditional costing approach, the average fixed cost values and average variable cost values are elaborated at company level. The aggregated fixed cost of the company is averaged by time, while the aggregated variable cost is averaged by transport performance. Having the generalised average cost values, the cost of a profit object can be calculated by multiplying these values by the time consumption and by the transport performance, respectively. Dedicated costs are directly assigned to the profit object, i.e. to a transport service. These are direct costs but may not cover the full set of direct costs specified in equation (4); this depends on the accounting system used. So the cost of a profit object can be calculated as follows:

C_j = C_j^{ded} + \frac{C^{f}}{\sum_{j=1}^{m} T_j}\, T_j + \frac{C^{v}}{\sum_{j=1}^{m} D_j}\, D_j \quad (5)

where: C^{f} - fixed cost of the company; C^{v} - variable cost of the company; D_j - transport performance of profit object j, i.e. the distance performed by the transport service; C_j^{ded} - dedicated cost of profit object j.
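As a reading aid, equations (1)-(5) can be traced in code. The sketch below is purely illustrative: the class and field names and the single top-down allocation pass are assumptions of this rewrite rather than part of the cited models, and it presumes the cost centres are supplied in the top-down order of the one-way performance flow:

```python
from dataclasses import dataclass, field

@dataclass
class CostCentre:
    """A resource pool (organisational unit, piece of equipment) bearing indirect costs."""
    name: str
    fixed_primary: float     # C_k^f, performance-independent primary cost
    variable_primary: float  # C_k^vp, performance-dependent primary cost
    performance: float       # P_i, total performance measured by the cost driver
    consumed_by_centres: dict = field(default_factory=dict)  # p_ki per consuming centre

@dataclass
class ProfitObject:
    """An elementary transport service gaining revenue and bearing costs."""
    name: str
    direct_cost: float       # C_j^d
    service_time: float      # T_j, duration of the transport service
    distance: float          # D_j, transport performance in km
    consumption: dict = field(default_factory=dict)           # p_ji per service centre

def allocate_secondary(centres: dict) -> dict:
    """Equation (2): returns the total variable cost C_k^v of every centre,
    i.e. its variable primary cost plus the secondary items allocated from
    serving centres proportionally to relative performance consumption."""
    variable = {name: c.variable_primary for name, c in centres.items()}
    for i, serving in centres.items():            # top-down pass, no feedbacks
        rate = variable[i] / serving.performance  # C_i^v / P_i
        for k, p_ki in serving.consumed_by_centres.items():
            variable[k] += rate * p_ki            # secondary cost of centre k
    return variable

def cost_improved(po: ProfitObject, centres: dict, variable: dict,
                  total_fixed: float, total_time: float) -> float:
    """Equation (4): direct cost + allocated variable indirect cost
    + time-based share of the aggregated fixed costs."""
    c_iv = sum((variable[i] / centres[i].performance) * p_ji
               for i, p_ji in po.consumption.items())
    c_if = total_fixed * po.service_time / total_time
    return po.direct_cost + c_iv + c_if

def cost_traditional(po: ProfitObject, c_fixed: float, c_var: float,
                     total_time: float, total_distance: float,
                     dedicated: float = 0.0) -> float:
    """Equation (5): company-level fixed cost averaged by time,
    variable cost averaged by transport performance (distance)."""
    return (dedicated
            + c_fixed * po.service_time / total_time
            + c_var * po.distance / total_distance)
```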
Analysing equation (5), we can conclude that the fixed cost of the company is distributed among profit objects on the basis of their relative time consumption. This is the same solution as used in equation (4). At the same time, the variable cost of the company is distributed among profit objects on the basis of their relative transport performance, instead of using differentiated cost drivers as applied in equation (4). However, it should be noted that many road transport companies use specific cost indicators where the total cost of the company is averaged by transport performance only. It means that even the relatively simple equation (5) may sometimes be ignored when determining the costs and prices of certain road transport services. Transport companies often face a high ratio of fixed costs. The set of fixed costs is, however, not homogeneous [21]. There may be some resources behind the fixed costs whose capacities can be adjusted to performance changes over a mid-term period. The costs caused by such resources can be regarded as semi-fixed costs. At the same time, the capacities of the remaining resources cannot be changed in line with short or mid-term performance fluctuations. The costs induced by such resources can be regarded as real fixed costs. In this paper a cost calculation model with a mid-term time horizon is used. So, when applying detailed allocations in the cost calculation process, it is reasonable to differentiate the fixed and semi-fixed cost items where appropriate. If semi-fixed cost items can be separated in the set of fixed costs and suitable cost drivers are available, such cost items may be regarded as variable costs and can be included in the allocation procedure. This may provide more accurate service cost data, as the use of semi-fixed costs enhances the ratio of cause-effect based cost allocations and at the same time decreases the number of sometimes hardly explainable fixed cost allocations. CALCULATION MODEL The methodology proposed can be regarded as an improvement of the traditional costing practice, as it overcomes the problems of simple, arbitrary cost allocations. Before applying this methodology to particular enterprises, it is worth conducting a deductive parametric calculation. This can then be the starting point of real-world applications by providing the necessary theoretical framework. It shall be noted, however, that the theoretical proof is to be adapted to the operational circumstances of the examined company, as each company has its own operational specifics. The calculation model is based on the operation model of Figure 1, consisting of the profit objects and the cost centres; the performance relationships identified between them rely on the empirical experience from the CEE countries and on former research results [19]. Elementary freight transport tasks or services are defined as profit objects, while a determined set of cost centres depicts the operational technology. Cost centres are explained further in the text. The vehicles, vehicle drivers and the transport management unit performing various operative tasks like customer care or forwarding take part mainly in the production of the basic transport services. The maintenance unit, if available, serves the vehicles by ensuring the required state of repair. Dedicated maintenance units are worth operating only in case of a relatively big vehicle fleet. Otherwise, maintenance tasks are normally outsourced. Technology management is responsible for controlling vehicle maintenance and carrying out capacity allocations, i.e.
the assignments of vehicles and vehicle drivers. Finally, the unit of central management and background services covers all other administrative functions required for managing and operating the organisation of the company. Note that the basic model presented above concentrates on the main activity field only, i.e. on full truck load road freight transport. If the company offers additional services like groupage transportation or value-added logistics services, the calculation scheme is to be extended by these elements. As the model is flexible, it can easily be adapted to the operational and service structure of a particular company. The next task is to analyse the cost structure, i.e. the primary costs of cost centres and the direct costs of profit objects. The primary costs of cost centres shall be divided into fixed, semi-fixed and variable parts. For the differentiation between these cost categories, their general definitions have been used. Nevertheless, it is sometimes not easy to decide whether a certain cost type is fixed or semi-fixed. Sound managerial experience can help overcome this problem. Where cost allocations are foreseen, i.e. in case of cost centres having variable or at least semi-fixed cost items, cost drivers are also to be determined. Typical direct costs can be dedicated costs, e.g. tolls or other infrastructure user charges, etc. connected to the transport service. Fuel costs may also be direct costs. If these cost elements cannot be regarded as direct costs, then they are normally assigned to vehicles as variable costs. The results of the structural analysis of costs are summarised in Table 1. The calculation objects are linked to Figure 1. Note that the cost items defined here are the most typical ones and may vary from company to company. Furthermore, the duration of a transport service may exceed the total working time of the drivers involved in this task, as it may contain unproductive operations.
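To connect the operation model and the cost structure just outlined with the equations, a hypothetical instantiation using the structures sketched earlier may help. Every cost centre, amount, driver and consumption figure below is invented for illustration and does not reproduce Table 1 or the case-study company:

```python
def build_centres(semi_fixed_as_variable: bool) -> dict:
    """Hypothetical one-vehicle instantiation of the Figure 1 operation model.

    Each centre carries (fixed, semi-fixed, variable primary) cost items;
    depending on the flag, semi-fixed items are treated either as fixed
    or as variable and thus enter the allocation. All figures are invented.
    """
    # name: (fixed, semi_fixed, variable_primary, performance, consumed_by_centres)
    raw = {
        # central management serves every other unit; driver: working hours
        "central_mgmt":    (40_000, 10_000,      0,  8_000,
                            {"technology_mgmt": 2_000, "transport_mgmt": 3_000,
                             "vehicle_1": 1_500, "driver_1": 1_500}),
        # technology management serves the vehicle; driver: controlling hours
        "technology_mgmt": (12_000,  3_000,      0,  1_800, {"vehicle_1": 1_800}),
        # transport management serves the services directly; driver: orders handled
        "transport_mgmt":  (18_000,  4_000,      0,  2_200, {}),
        # vehicle; driver: km travelled (maintenance outsourced, in material costs)
        "vehicle_1":       (15_000,  5_000, 30_000, 90_000, {}),
        # vehicle driver; driver: driving hours
        "driver_1":        ( 2_000,  1_000, 20_000,  1_900, {}),
    }
    centres = {}
    for name, (fx, sfx, var, perf, cons) in raw.items():
        if semi_fixed_as_variable:
            fx_total, var_total = fx, var + sfx   # semi-fixed allocated, as in (7)
        else:
            fx_total, var_total = fx + sfx, var   # semi-fixed kept fixed, as in (6)
        centres[name] = CostCentre(name, fx_total, var_total, perf, dict(cons))
    return centres
```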
If semi-fixed cost items are considered as fixed costs then, provided that vehicle x (x = 1…X) and driver y (y = 1…Y) take part in the production of the profit object, the cost of transport service j can be calculated, by exploiting and extending the general equations (1)-(4) and considering the operational structure of Figure 1, as shown in equation (6). For the definitions and abbreviations of the variables see Table 1. If the semi-fixed cost items are considered as variable costs, equation (6) shall be modified accordingly, yielding equation (7). The scheme described by equations (6) or (7) can serve as the improved cost calculation tool of elementary road freight transport services. It is namely expected that the service cost data produced by the application of these equations are in general more accurate than the ones derived from traditional costing procedures represented by equation (5). This can be explained by the transparent allocation of differentiated indirect costs. Thanks to the consistent use of cause-effect based allocations and assignments, the accuracy of transport costing can be significantly increased. However, the applicability of the sophisticated procedures and equations may be influenced by the availability and the quality of the requested input data. Thus, actual data collection and processing practices may lead to modifications regarding the applicability of the theoretical parametric equations. Similarly, if the operational mechanism of the examined company differs from the one presented in Figure 1, the basic calculation algorithms may also need adaptation. Nevertheless, the principles of the calculation remain the same even if the equations are slightly altered. CASE STUDY To demonstrate the advantages of the improved costing procedure, sample calculations have been performed based on real data and relying on the algorithms developed. The input data have been provided by a Hungarian road freight transport company operating 20 vehicles and employing 22 drivers at the end of the reference year. Two employees are responsible for technology management, while transport management is carried out by five persons. The vehicles are owned by the company. Maintenance is fully outsourced, so the maintenance costs are part of the material costs. The drivers are paid on the wage basis only. The authors confirm that they have permission to anonymously use the data provided by the company. The company has performed 770 freight transport services in the reference year. Its market covers the whole of Europe, and domestic as well as international services are offered. The service structure can be regarded as inhomogeneous, as the company's transport tasks depend on several factors like geography, types of goods, complexity of the forwarding process, etc. Thus, the introduction of the improved cost calculation seemed to be reasonable. Cost calculations have been carried out for several profit objects. Sample calculations have contributed to refining the model as well as to identifying the gaps between the current and the desired data collection practice. To demonstrate the procedure, the evaluation of five selected elementary transport services performed by two vehicles and two drivers is presented in the following phases: a) based on input data of the original data collection; b) based on input data of the improved data collection.
Both phases make use of the following calculation methods: 1) traditional cost calculation using equation (5), without differentiating fixed and semi-fixed costs; 2) traditional cost calculation using equation (5), differentiating fixed and semi-fixed costs, where semi-fixed costs are regarded as variable costs; 3) improved cost calculation using equation (6), where semi-fixed costs are regarded as fixed costs; 4) improved cost calculation using equation (7), where semi-fixed costs are regarded as variable costs. The input data provided by the original data collection mechanisms are not detailed enough to support the full-scale application of the improved costing equations. The current management accounting system neglects most of the proposed performance indicators, i.e. cost drivers, on the one hand, and uses only central management and vehicles as cost centres on the other. Furthermore, there are no costs directly assigned to the services. The input data for the cost calculation of the selected transport services derived from the current data collecting system are summarised in Table 2 (for the abbreviations see Table 1). The results of the cost calculation based on the original data collection system are shown in Table 3. Note that equations (6) and (7) could be used in a limited, i.e. simplified, way only, due to the fact that the detailed input data are mostly missing. The differences between the values calculated by the traditional and by the corresponding improved equations, i.e. equation (5) vs. equation (6) and equation (5) differentiated vs. equation (7), are within 2%, so there are no relevant differences between the results. Thus, it can be concluded that it is not worth applying the improved costing method without input data of the requested quality and format, i.e. when the input data are not detailed enough or contain faulty records. The improvement of the cost calculation system involved the introduction of new cost centres, cost drivers and direct cost categories according to the calculation model presented in Figure 1 and described in Table 1. The data collecting system was also refined according to the requirements of the new calculation structure, i.e. more detailed and more precise input data were sought. Of course, the specific operational features of the examined company have been taken into account, too. As most of the additional financial and technical data could be extracted from the existing information systems or records with only minor transformations and estimations, no considerable effort was necessary to build up the extended input database (see Table 4; for the abbreviations see Table 1). The transformations were carried out with the help of additional tables, while the estimations were obtained from brainstorming. All these efforts required only a few man-days. The results of the cost calculation based on the improved data collection system are shown in Table 5. As can be seen, these values differ from the ones presented in Table 3. The differences between the cost values produced by the same equations but based on dissimilar, i.e. conventional or extended, input databases range between 1% and 14%.
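Putting the sketches together, a small driver along the following lines mimics the four-method comparison. The two sample services and the company totals are hypothetical, chosen to be self-consistent with the invented centre data above, so the printed spread merely illustrates the kind of divergence discussed below:

```python
# Hypothetical elementary services; consumption maps give p_ji per centre.
services = [
    ProfitObject("regular_domestic", direct_cost=240, service_time=14,
                 distance=600,
                 consumption={"vehicle_1": 600, "driver_1": 9,
                              "transport_mgmt": 1}),
    ProfitObject("irregular_export", direct_cost=980, service_time=52,
                 distance=1_450,
                 consumption={"vehicle_1": 1_450, "driver_1": 24,
                              "transport_mgmt": 3}),
]
TOTAL_TIME = 2_000.0   # total service hours of the period (hypothetical)
TOTAL_KM = 90_000.0    # total transport performance (matches vehicle_1)
C_FIX, C_SFIX, C_VAR = 87_000.0, 23_000.0, 50_000.0  # company aggregates

for po in services:
    # methods 1) and 2): traditional averages, semi-fixed as fixed / as variable;
    # dedicated costs are assumed here to coincide with the direct costs.
    m1 = cost_traditional(po, C_FIX + C_SFIX, C_VAR, TOTAL_TIME, TOTAL_KM,
                          dedicated=po.direct_cost)
    m2 = cost_traditional(po, C_FIX, C_VAR + C_SFIX, TOTAL_TIME, TOTAL_KM,
                          dedicated=po.direct_cost)
    results = {"method 1": m1, "method 2": m2}
    # methods 3) and 4): improved allocation in the spirit of equations (6)/(7)
    for flag, label in ((False, "method 3"), (True, "method 4")):
        centres = build_centres(semi_fixed_as_variable=flag)
        variable = allocate_secondary(centres)
        total_fixed = sum(c.fixed_primary for c in centres.values())
        results[label] = cost_improved(po, centres, variable,
                                       total_fixed, TOTAL_TIME)
    print(po.name, {k: round(v, 2) for k, v in results.items()})
```

With these invented figures the four methods differ by roughly 5-10% per service, more for the irregular export task, which echoes the direction, though not the magnitude, of the real results reported next.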
The differences between the outcomes of the traditional and the corresponding improved calculation schemes are significant when using an extended input database. The deviations can be 10-15% or even more. The more irregular the transport service, the higher is the variance of its calculated cost and the higher may be the risk that the real cost remains hidden as a result of the simple averaging practices. To give an example, Figure 2 compares the various calculation results of transport service 3. As can be seen, this is an irregular transport service, and the outcome depends on the calculation equations applied: the calculated values vary significantly. As it matters which equation one applies, the application of the improved, more transparent equations is worth considering. It can also be seen that the calculation results are influenced by the data collection regime as well, so the introduction of the improved equations shall be accompanied by the enhancement of the data collection system. Summarising the outcomes of the case study, one can conclude that the variance of transport service cost values may be rather high: the values depend on the costing method applied as well as on the input database used. Considering this variability, it is of high importance that a well-established and transparent methodology is used for cost calculations. The traditional method, i.e. equation (5), uses few cost drivers and aggregated cost items, which may lead to distorted costing information and to disregarding service characteristics in case of inhomogeneous transport tasks. Although equation (6) operates with differentiated cost items and cost drivers, it may result in distortions when fixed costs, or costs regarded as fixed ones, dominate the company's cost structure. Considering the possible distortions of using equation (6), it is advisable for transport companies to use equation (7) and so benefit from the advantages of cost and cost driver differentiation, as far as the data collecting system supports it. The more heterogeneous the service structure, the more advantages can be expected from the development of the cost and performance management system. When no developments in the data collecting regime can be executed or the heterogeneity of the service structure is low, equation (5) with a differentiation of fixed and semi-fixed costs may be a sufficient second-best solution. CONCLUSIONS The methodological improvement of transport cost calculation enables a more effective cost management of small and medium sized road haulage companies. Service costs become more accurate through the application of the developed equations, as the ratio of directly assignable costs increases and the allocation of the remaining indirect costs is built on a transparent, cause-effect basis. To be able to utilize these advantages, the new costing procedures shall be accompanied by the extension of data collection techniques as well.
At the same time, the more sophisticated costing system requires high-quality input data, with special regard to differentiated cost components and performance indicators as cost drivers. Additionally, cost records shall be supplemented by performance records with regular updates. Finally, some data transformations may also be needed for the improved calculations. Of course, all these efforts may cause some additional expenses for the company, while general experience shows that the data necessary for the improved method are usually available within the company. Thus, if the enterprise has an extensive and inhomogeneous service system, its decision support regime can be made more effective through the proposed method at a reasonable price. This more effective costing and pricing scheme may be a competitive advantage in the Central-East-European freight transport market, where service supply usually exceeds the demand. Obviously, the average Central European road freight transport company with a labour force of fewer than 10 employees cannot be expected to read scientific journals and thus apply the latest accounting methodologies, such as the one described in this article. Hence, it is of utmost importance to place adequate emphasis on disseminating the information given above, and to do so in a manner and layout appropriate to the target audience. SMEs in road freight transport may be reached by different national stakeholders or advocacy organizations, like the Hungarian Road Freight Association, the Hungarian Logistics Association or the Hungarian Association of Logistics, Purchasing and Inventory Management in Hungary, and other national organizations in the peer countries. International organisations, like the European Association for Forwarding, Transport, Logistics and Customs Services (CLECAT), the European Logistics Association (ELA) or the International Road Transport Union (IRU), etc. may also contribute to disseminating the best practices. These may rely on organizing dissemination events, which can serve as a platform for providing basic training in the improved costing method. At the same time, similarly to the endeavours of [22], a web-based tool could also be developed, which would enhance the uptake of the novel methodology, especially if the ICT-related gap between smaller haulage operators and larger logistics companies, as mentioned by [23], can be reduced. The advantages of improving the costing system are generally proportional to the complexity of the operation, just as in the case of a public transport system [24]: the more sophisticated the operational structure of the road transport company, the more benefit is likely to be gained from the improvement of the cost management system. As the developed MFCA-based method is flexible, the calculation scheme can be adapted to different kinds of road transport companies through the consistent use of the basic algorithms and principles. At the same time, it shall be noted that the developed cost calculation model is not able to support all kinds of management decisions. It can mainly be used for planning or evaluating short and mid-term business operations in road freight transport of small and medium scale. Furthermore, the calculation equations (6) and (7) rely on a relatively simple operation model described by Figure 1. If more attention is paid to the operation model, i.e. more sophisticated operational structures are drafted, the calculation equations can also be refined and thus more complex decisions can be supported in the future.
Figure 1 - Operation model of a typical small and medium sized road transport company
Figure 2 - Calculated cost values of transport service 3 in EUR
Table 1 - Cost structure of cost centres and profit objects
Table 2 - Input data from the original data collection mechanism
Table 3 - Cost calculation results in EUR, based on the original data collection mechanism
Table 4 - Input data from the improved data collection mechanism
Table 5 - Cost calculation results in EUR, based on the improved data collection mechanism
2017-05-03T12:15:35.088Z
2015-10-28T00:00:00.000
{ "year": 2015, "sha1": "e26c1d35480d6368443c87df2671d1825b3318ed", "oa_license": "CCBY", "oa_url": "https://traffic.fpz.hr/index.php/PROMTT/article/download/1719/1364", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "e26c1d35480d6368443c87df2671d1825b3318ed", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Economics" ] }
232172393
pes2o/s2orc
v3-fos-license
Low Doses of Glyphosate/Roundup Alter Blood–Testis Barrier Integrity in Juvenile Rats It has been postulated that glyphosate (G) or its commercial formulation Roundup (R) might lead to male fertility impairment. In this study, we investigated the possible effects of G or R treatment of juvenile male rats on blood-testis barrier function and on adult male sperm production. Pups were randomly assigned to the following groups: control group (C), receiving water; G2 and G50 groups, receiving 2 and 50 mg/kg/day G respectively; and R2 and R50 groups, receiving 2 and 50 mg/kg/day R respectively. Treatments were performed orally from postnatal day (PND) 14 to 30, a period of life that is essential to complete a functional blood-testis barrier. Evaluation was done on PND 31. No differences in body and testis weight were observed between groups. Testis histological analysis showed disorganized seminiferous epithelium, with apparently low cellular adhesion in treated animals. Blood-testis barrier permeability to a biotin tracer was examined. A significant increase in permeable tubules was observed in treated groups. To evaluate possible mechanisms that could explain the effects on blood-testis barrier permeability, intratesticular testosterone levels, androgen receptor expression, thiobarbituric acid reactive substances (TBARS) and the expression of intercellular junction proteins (claudin11, occludin, ZO-1, and connexin43, 46, and 50, which are components of the blood-testis barrier) were examined. No modifications in the above-mentioned parameters were detected. To evaluate whether juvenile exposure to G and R could have consequences during adulthood, a set of animals of the R50 group was allowed to grow up until PND 90. Histological analysis showed that control and R50 groups had normal cellular associations and complete spermatogenesis. Also, blood-testis barrier function was recovered, and testicular weight, daily sperm production, and epididymal sperm motility and morphology did not seem to be modified by juvenile treatment. In conclusion, the results presented herein show that continuous exposure to low doses of G or R alters blood-testis barrier permeability in juvenile rats. However, considering that adult animals treated during the juvenile stage showed no differences in daily sperm production compared with control animals, it is feasible to think that blood-testis barrier impairment is a reversible phenomenon. More studies are needed to determine possible damage in the reproductive function of human juvenile populations exposed to low doses of G or R.
INTRODUCTION Glyphosate (G)-based herbicides are important tools used worldwide in agriculture, forestry, and weed control (1). In addition, their use has spread because of the development of transgenic plants that tolerate high concentrations of these compounds (2,3). Roundup (R) is the most widely used formulation. It comprises mixtures of G and adjuvants, such as polyoxyethylene tallowamine (POEA), to enhance the uptake and translocation of the active ingredient into plant cells (4). G prevents plant development by inhibiting the enzyme enolpyruvylshikimate phosphate synthase (EPSPS) and interfering with the synthesis of essential aromatic amino acids (5). As this enzyme is not expressed by any member of the animal kingdom, the actions of G were supposed to be present exclusively in plants (6,7). However, unexpected effects in the animal kingdom have been observed. In particular, G might act as an endocrine disruptor, which could lead to male fertility impairment. Compelling evidence supports the existence of adverse effects of treatment with G or R on male reproduction (8)(9)(10)(11)(12). However, most of these studies were performed using doses far above the maximum dietary and environmental exposure levels reported in humans. For this reason, whether G is harmful to male reproductive health when exposure occurs at low doses and at an early life stage is still under debate. Sertoli cells provide structural and nutritional support to germ cells. An important and unique physiological function of Sertoli cells is to contribute to the maintenance of a microenvironment suitable for the development of spermatogenesis through the establishment of the blood-testis barrier (BTB) (13,14). In the rat, the BTB begins to be assembled around 15 to 20 days of age, when Sertoli cell proliferation ceases. However, a fully functioning BTB is not established until postnatal day (PND) 25 to 30 (15,16). This period is characterized by the first wave of spermatogenesis up to round spermatids, the presence of numerous large pachytene spermatocytes and the formation of secondary spermatocytes. It is worth mentioning that the first tubules with step 19 spermatids appear on PND 45 and the first spermatozoa in the epididymis appear on PND 52 (17).
The presence of an appropriately assembled BTB is essential to maintain spermatogenesis, a process that in the rat takes place during four complete cycles of the seminiferous epithelium and lasts from 49 to 52 days (18). Remarkably, dysfunction of the BTB has been considered an important mechanism involved in xenobiotic-induced reproductive toxicity (19). Recently, in studies using 20-day-old rat Sertoli cell cultures, we have shown that G and R treatments alter the Sertoli cell junction barrier and postulated that BTB integrity is a sensitive target for the adverse effects of G or R on male reproductive function (20). Nevertheless, no studies have addressed a possible disruption of the BTB by G and/or R treatments in vivo and its possible role in reproductive toxicity. In this study we investigated the possible effects of low doses of G and/or R treatment of juvenile rats on BTB function and on adult male sperm production. MATERIALS AND METHODS Animals All the procedures used in this study were approved by the Comité Institucional de Cuidado y Uso de Animales de Laboratorio (CICUAL) of the Hospital de Niños Ricardo Gutiérrez (Res #2018-002) and performed in accordance with the principles and procedures outlined in the Guide for the Care and Use of Laboratory Animals issued by the National Institutes of Health of the USA. 3-month-old pregnant Sprague Dawley rats (250-300 g) were purchased from the animal facility of the Facultad de Ciencias Veterinarias, Universidad de Buenos Aires (FCV-UBA). In that facility, nulliparous female rats were mated with a stud male. The animals were separated after confirmed copulation, either by detection of a vaginal plug or of spermatozoa in vaginal smear samples. Five days prior to pup delivery, rats were transported to our animal facility in transportation crates and were housed singly in free exchange cages (30 cm x 40 cm x 23 cm) with a stainless steel tray containing pine wood shavings as bedding. Animals were maintained under controlled conditions of temperature (20 ± 2°C), relative humidity and lighting (12 h light-12 h dark cycle) with free access to water and pellet laboratory chow (Rat-Mouse Diet, Asociación de Cooperativas Argentinas, Buenos Aires, Argentina). Experimental Design At delivery, pups were sexed according to the anogenital distance. Litters were adjusted to 8 pups, prioritizing a maximum of 8 male pups per litter when possible. Male pups were randomly assigned to one of the following treatment groups: control group (C), receiving water; G2 and G50 groups, receiving 2 and 50 mg/kg/day G, respectively; and R2 and R50 groups, receiving 2 and 50 mg/kg/day R, respectively. G was provided by Sigma-Aldrich (St Louis, USA). The R formulation was a liquid water-soluble formulation containing 66.2% of G potassium salt as its active ingredient. Treatments were given orally from PND 14 to 30. Pups were weaned on PND 21 and euthanized on PND 31. A set of animals was treated from PND 14 to 30 with water or 50 mg/kg/day R and kept without further treatment until PND 90. Tissues and blood were collected from euthanized animals on PND 31 or PND 90 (Figure 1). Rats were treated from PND 14 to 30 to cover the full period of maturation of the BTB. This exposure period corresponds to Sertoli cell maturation associated with BTB formation, germ cell meiosis, and the appearance of the first spermatids. The dose of 2 mg/kg/day was selected because it is in the order of magnitude of the reference dose (RfD) of 1 mg/kg/day recently reassigned for glyphosate by the US EPA (2017).
It is worth mentioning that this dose is used in several reports to analyze the reproductive toxicity of the herbicide (21)(22)(23). The dose of 50 mg/kg/day was selected based on the no observed adverse effect level (NOAEL) for G (24) and on a previous report from Romano et al (10), who proposed this dose of R as appropriate for future toxicological analyses. No alterations in maternal care and nursing among the experimental groups were detected. Treatments caused no overt signs of toxicity. Collection of Blood and Tissues Blood was obtained by intracardiac puncture. The samples were allowed to clot at room temperature for 15 min, and then they were centrifuged at 950g for 5 min in order to obtain the serum. Supernatants were immediately frozen at −20°C for subsequent analysis. Biochemical parameters were determined with an automated Cobas c 501 analyzer (Roche Diagnostics, Mannheim, Germany). At PND 31, animals were euthanized by CO2 asphyxiation, and testes were dissected, weighed, and used for histological analysis and the BTB assay. Also, testes were dissected and snap frozen for reverse transcription quantitative real-time polymerase chain reaction (RT-qPCR) and thiobarbituric acid reactive substances (TBARS) determination. Tissue samples (liver, kidney, stomach, and intestine) were also collected at that time. At PND 90, animals were anesthetized with a mixture of ketamine-xylazine and tissues were sampled and weighed. Testes were used for histological analysis and the BTB assay and to determine daily sperm production (DSP). Epididymides were used to evaluate sperm parameters. Histological Analysis For histological analysis, testes were removed and fixed in Bouin solution, and the other tissue samples were fixed in 10% formalin. Dehydration was carried out at room temperature using ascending concentrations of ethanol and shifting to xylene. After clearing, tissues were embedded in paraffin wax and 3- to 5-µm-thick sections were cut using a microtome (Thermo Fisher Scientific, UK). Sections were transferred to albumenized slides that were preheated to 37°C. Tissues were rehydrated in descending concentrations of ethanol, stained with hematoxylin/eosin and covered with a coverslip. The prepared slides were observed under an Eclipse 50i microscope (Nikon Instruments Inc., Tokyo, Japan) equipped with a digital camera (Canon, Japan). For the TUNEL assay, testicular sections were revealed with the In Situ Cell Death Kit (Roche Applied Science, Indiana, USA) as previously described (25). Intratesticular and Serum Testosterone Determination Testosterone was extracted from PND 31 testis homogenates with diethyl ether, followed by evaporation of the organic phase and reconstitution of the extracted testosterone in 0.1% PBS. Serum testosterone of PND 90 animals was also evaluated. Testosterone concentration was measured with an electrochemiluminescence assay (Roche Diagnostics, Mannheim, Germany) using a Cobas e 411 analyzer according to the manufacturer's instructions. Testosterone assay sensitivity was 10 ng/dl, and intra- and interassay CVs were 2.4% and 2.6%, respectively. Thiobarbituric Acid Reactive Substances Assay Lipid oxidation was determined by the colorimetric assay of TBARS (26). Testis homogenates were prepared in PBS containing 0.4% w/v butylated hydroxytoluene on ice and then disrupted by ultrasonic irradiation. An aliquot (25 µl, corresponding to 450 µg protein) was added to 175 µl of mixed reaction solution (0.15% w/v SDS, 0.5 N HCl, 0.75% w/v phosphotungstic acid, and 0.175% w/v 2-thiobarbituric acid).
The mixture was heated in a boiling water bath for 45 min. TBARS were extracted with 200 µl of n-butanol. After centrifugation at 10,000g for 5 min at 4°C, the absorbance at 532 nm of the butanolic phase was measured. A calibration curve was performed using malondialdehyde (MDA), generated from 1,1,3,3-tetramethoxypropane (0.4-8 µM), as standard to express the absorbance changes as nmol MDA/µg protein. Blood-Testis Barrier Integrity Assay The permeability of the BTB was assessed with a biotin tracer as described previously by Perez et al (27). Immediately after testes isolation, a solution of 10 mg/ml EZ-Link Sulfo-NHS-LC-Biotin (Pierce) dissolved in PBS containing 1 mM CaCl2 was injected into the testis. The administered volume represented 10% of testis weight. Testes were then incubated at 34°C for 30 min, immersed in 4% paraformaldehyde and embedded in paraffin. For localization of the biotin tracer, testis sections (5 µm thick) obtained from different levels were deparaffinized and hydrated. To avoid nonspecific staining, sections were blocked with 5% nonfat dry milk in PBS containing 0.01% Triton X-100 for 15 min prior to incubation with streptavidin-rhodamine (1:300; Invitrogen, USA) for 45 min at room temperature. After nuclear staining with DAPI, sections were mounted in buffered glycerin and observed with an Axiophot fluorescence microscope with epi-illumination. At least 50 seminiferous tubules from 3 nonconsecutive testis sections from each rat were examined. Results are expressed as % of permeable tubules. RT-qPCR Analysis Total RNA was isolated from testis homogenates with TRI Reagent (Sigma-Aldrich). The amount of RNA was estimated by spectrophotometry at 260 nm. RT was performed on 2 µg RNA at 42°C for 50 min with a mixture containing 200 U MMLV reverse transcriptase, 125 ng random primers and 0.5 mM dNTP Mix (Invitrogen, Argentina). qPCR was performed with a Step One Real-Time PCR System (Applied Biosystems, Warrington, UK). The specific primers for RT-qPCR are shown in Table 1. Amplification was carried out as recommended by the manufacturer: 25 µl reaction mixture containing 12.5 µl of SYBR Green PCR Master Mix (Applied Biosystems), the appropriate primer concentration and 1 µl of cDNA. The relative cDNA concentrations were established by a standard curve using sequential dilutions of a cDNA sample. The amplification program included an initial denaturation step at 95°C for 10 min, followed by 40 cycles of denaturation at 95°C for 15 s and annealing and extension at 60°C for 1 min. Fluorescence was measured at the end of each extension step. After amplification, melting curves were acquired and used to determine the specificity of PCR products. The relative standard curve method was used to calculate relative gene expression. Relative mRNA levels were normalized to the reference genes HPRT1 and β-actin. Determination of Daily Sperm Production At PND 90, testes were collected and processed following the experimental procedure described in Fernandes et al (28). The tunica albuginea was removed from one testis and the parenchyma was homogenized in 0.9% w/v NaCl containing 0.5% w/v Triton X-100. Then, the homogenates were ultrasonically disrupted for 30 s. Samples were diluted 1:10, transferred to a Neubauer chamber and counted in quadruplicate. Elongated spermatid nuclei with a shape characteristic of step 19 spermatids and resistant to homogenization were counted to determine the number of elongated spermatid nuclei.
TABLE 1 | Rat-specific primer sets for analysis by RT-qPCR (columns: gene; primer sequence, FWD, forward, REV, reverse; product size (bp); accession number). To calculate the DSP, the number of spermatids was divided by 6.1, which is the number of days of the seminiferous cycle during which these spermatids are present in the seminiferous epithelium (29). Assessment of Sperm Parameters At PND 90, sperm were recovered from the cauda epididymis. The epididymides were placed in a conical tube, covered with 750 µl of fresh medium (30), and sperm were allowed to swim up at 37°C. After 10 min, aliquots of the upper sperm layer were recovered for motility and morphology evaluation. To evaluate total motility (progressive + nonprogressive), sperm suspensions were placed on pre-warmed slides and analyzed subjectively under a light microscope (400× magnification). To assess sperm morphology, a 10 µl aliquot of the sperm suspension was smeared over a microscope slide. After drying in air, the smear was fixed with methanol for 5 min, washed with distilled water, stained with Harris hematoxylin for 15 min, and finally washed with tap water. Sperm morphology was evaluated using a Nikon microscope (Nikon Instruments Inc., Melville, NY, USA) at 1000× magnification. In all cases, at least 200 spermatozoa from each sample were assessed. Statistical Analysis At least five animals per treatment group were used, and data are presented as mean ± SD. The variables under study were first submitted to tests of normal distribution (Shapiro-Wilk's test) and homogeneity of variances (Levene's test). Then, one-way ANOVA followed by the Tukey-Kramer test for the comparison of multiple groups was performed. To assume normal distribution, percentages were expressed as ratios and subjected to the arcsine square root transformation. Results were compared using the unpaired Student t test. Probabilities <0.05 were considered statistically significant. InfoStat 2016 (Grupo InfoStat, Facultad de Ciencias Agropecuarias, Universidad Nacional de Córdoba, Argentina) was used. RESULTS Glyphosate and Roundup Effects on Serum Biomarkers and Testis and Body Weight Initially, rats were exposed to low doses (2 and 50 mg/kg/day) of G or R from PND 14 to 30, a period of life that is essential to complete a functional BTB. Serum biomarkers of renal (urea, creatinine) and liver (aspartate and alanine aminotransferases: AST and ALT) function were analyzed. These biomarkers did not change in treated groups compared with the control (Table 2). In addition, treatments did not alter liver, kidney, stomach, and intestine histology (data not shown). To explore the impact of G or R exposure on the male reproductive system, we assessed body and testicular weights after treatments. No differences in body and testis weight or in the testis/body weight ratio at any G or R dose tested were observed (Table 2). Glyphosate and Roundup Effects on Testicular Histology and Blood-Testis Barrier Function Histological examination of the testes revealed differences in the seminiferous tubules between control and treated groups (Figure 2). Some seminiferous tubules of the G2, G50, and R2 groups showed a disorganized epithelium, with apparently low cellular adhesion. In R50-treated males, tubules presented severe disorganization and epithelial desquamation of the most differentiated cells (spermatocytes and round spermatids). In this group, the percentage of affected tubules was higher than in the other groups tested.
Although epithelium disorganization was observed in treated groups, the histological characteristics of Sertoli cells (nucleus located parallel to the basement membrane, next to spermatogonia and premeiotic spermatocytes, and triangular in shape) remained unaltered. Additionally, TUNEL analysis of rat testis sections from control and R50-treated animals was performed. Only a few seminiferous tubules with some TUNEL-positive cells located in areas not corresponding to positions occupied by Sertoli cells were observed in both groups (Supplementary Figure 1). To analyze the effects of G or R treatment on BTB integrity, the permeability of the BTB was evaluated using a biotin tracer, which is excluded from the adluminal compartment of intact seminiferous tubules. Figure 3A shows that the tracer entered the adluminal compartment in animals treated with G or R at both doses tested. Figure 3B shows the data obtained after determining the percentage of permeable tubules in the different experimental groups. A significant increase in the percentage of seminiferous tubules with a permeable barrier was observed in the G and R groups. The next set of experiments was performed to evaluate possible mechanisms that would explain the deleterious effects of G or R treatments on BTB permeability. First, as testosterone is the main regulator of BTB formation and integrity, intratesticular testosterone (ITT) levels and androgen receptor (AR) expression were evaluated. Figure 4A shows that G and R treatments did not modify either ITT levels or AR expression. Secondly, as it has been demonstrated that reactive oxygen species (ROS) mediate some of the harmful effects on BTB integrity, we decided to evaluate TBARS levels after G or R treatments. Figure 4B shows that TBARS levels were not modified by G or R exposure. Thirdly, the expression of intercellular junction proteins such as claudin11, occludin, ZO-1, and connexin43, 46, and 50, which are components of the BTB, was analyzed. Figure 4C shows that G or R treatments did not modify claudin11, occludin, ZO-1, connexin43, 46, and 50 mRNA levels. Juvenile Roundup Treatment Effects on Adult Animals In order to evaluate possible consequences of juvenile herbicide treatment in adulthood, a set of animals was treated with 50 mg/kg/day of R from PND 14 to 30 and then allowed to grow until PND 90. Several parameters were analyzed at this age. As observed on PND 31, Table 3 shows that no differences in urea, creatinine, AST, and ALT levels between groups were observed. Figure 5A shows the histological examination of the testis at PND 90 in the control and R50 groups. Tubules had normal cellular associations and complete spermatogenesis. Figure 5B shows the analysis of the frequency of the stages of the seminiferous epithelium. No alteration of the presence of the different stages of the cycle of the seminiferous epithelium was found between groups. Figure 5C shows the study with the biotin tracer and Figure 5D shows the data obtained after determining the percentage of permeable tubules. A small increase in the percentage of permeable tubules in the R50 group was observed. Despite this small increase in BTB permeability, testicular weight and DSP were not modified by juvenile treatment with R (Table 3 and Figure 6A). Cauda epididymal sperm were used to analyze sperm motility and morphology. No differences in motility and morphology were observed between the sperm of the two groups (Figures 6B, C).
In addition, no changes were observed in epididymal weight or in the epididymal/body weight ratio between groups (Table 3). DISCUSSION Mounting evidence indicates a declining trend in the male reproductive health of both wildlife and humans. A meta-analysis reporting a significant decline in sperm counts between 1973 and 2011 among men unselected by fertility from North America, Europe, Australia, and New Zealand (31) raised considerable scientific and public concern regarding the adverse effects of various environmental contaminants on male reproduction. Thus, several studies were conducted to assess the impact of xenobiotics on male reproductive health. In this context, the juvenile population requires special attention due to its higher sensitivity to xenobiotic exposure (32). Furthermore, evidence shows the impact of pollutants on development and reproductive functions (33)(34)(35). Notwithstanding the foregoing, no studies are available regarding the consequences of G or R exposure during critical periods of male reproductive development, such as the period when a functional BTB is established. Therefore, the objective of our work was to evaluate the impact of exposure of juvenile rats to low doses of G and R on the assembly of the BTB and its possible influence on adult life. The BTB provides a unique microenvironment in the adluminal compartment of the seminiferous epithelium through the isolation of postmeiotic germ cells from the bloodstream and its capacity to regulate the entry of substances (36). Any imbalance in the composition of the adluminal compartment fluid, as a consequence of BTB damage, leads to impaired spermatogenesis. The relationship between BTB dysfunction and the impediment of meiosis and spermatogenesis has been widely documented over the years. In this context, several experimental approaches that caused BTB disruption resulted in disturbance of seminiferous epithelium homeostasis and, ultimately, loss of spermatogenesis. Among them, it is relevant to highlight the analysis of knockout mice for the genes encoding claudin11 (37) and occludin (38): despite displaying variable phenotypes, in these models spermatogenesis does not proceed. In addition, rats with autoimmune orchitis display loss of occludin expression and consequently an increase in BTB permeability, associated with damage of the seminiferous epithelium and disruption of spermatogenesis (27). Although the reason for declining spermatogenesis during aging remains largely unknown, it is suspected that impaired function of the BTB might account for it (39). It is also worth mentioning that environmental toxicants, such as cadmium and bisphenol A, induce BTB disruption, eliciting subsequent damage to germ-cell adhesion and thereby leading to germ-cell loss, reduced sperm count, and male infertility or subfertility (40)(41)(42). Overall, there is a consensus that BTB impairment leads to spermatogenesis arrest. As for the evaluation performed on PND 31 rats, biochemical markers of hepatic and renal function were determined as indicators of toxicity. Several studies have shown altered hepatic biochemical parameters using different models of doses and exposure; however, the majority of them performed the analysis administering high doses that lack toxicological relevance (43,44). Under our experimental conditions, G or R treatments did not affect ALT and AST levels.
In addition, body, testis, and relative testis weight were not affected by the treatments; nevertheless, the lack of effect on organ weight should not be used to rule out other biological effects on the testis that may be more sensitive to the toxicants. In this respect, it is worth mentioning that several changes in the histological structure of the rat testis after G or R treatments from PND 23 for 35 days have been previously observed. Nardi et al (45) showed a decrease in Sertoli and Leydig cell number and an increase in the percentage of degenerated Sertoli and Leydig cells after daily exposure of juvenile rats to 50 and 100 mg/kg of G. Moreover, Romano et al (10) showed an increase in the luminal diameter of seminiferous tubules and a reduction in the seminiferous epithelium after treatment with 5, 50, or 250 mg/kg/day of R. In the present study, testicular histological analysis of juvenile animals exposed to doses of 2 and 50 mg/kg/day of the herbicides from PND 14 to PND 30 showed a disorganized epithelium with apparently low cellular adhesion, effects that were more pronounced when animals were exposed to 50 mg/kg/day of R. Although epithelium disorganization was observed in treated groups, the histological characteristics of Sertoli cells remained unaltered. In addition, Sertoli cell apoptosis was not observed. As mentioned above, it is well known that the BTB of adult and juvenile individuals is a target of multiple xenobiotics (19,(46)(47)(48)(49)(50). In our study, BTB impairment was demonstrated by the significant increase in BTB permeability in both G- and R-treated groups at both doses tested. These results suggest that the BTB function of juvenile rats may be injured after exposure to low doses of the herbicide. Androgens are crucial to establish and maintain BTB integrity (51)(52)(53). In this context, it is tempting to speculate that G or R effects on BTB integrity can be partially attributed to deleterious effects on androgen action: testosterone production and/or androgen receptor expression. In the present study, no significant differences among groups in intratesticular testosterone levels were observed. The androgen receptor has also been postulated as a target for the deleterious effects of G or R; however, the results are somewhat controversial. On the one hand, it has been shown that treatment of drakes with R decreases androgen receptor expression in Sertoli cells (54). On the other hand, in rats it has been observed that G treatment does not modify Sertoli cell androgen receptor expression (55). The results presented herein show that G or R treatments did not modify testicular androgen receptor mRNA levels under our experimental conditions. Altogether, these results suggest that testosterone levels and androgen receptor expression are probably not responsible for the observed effects of G or R on BTB integrity in our experimental model. An additional aspect to be considered relates to the growing evidence indicating that exposure to different classes of environmental toxicants commonly increases oxidative stress in the testis (56)(57)(58)(59)(60). As it has been observed that G and R increase oxidative stress in rat liver (61,62), we hypothesized that the herbicides might cause a disturbance in redox balance that could be responsible for the alteration in BTB permeability. However, the results presented herein suggest that this might not be the case, as G and R treatment did not modify TBARS levels.
Cell junction proteins have been regarded as early targets for different classes of reproductive toxicants (48,63). In this respect, a decrease in the expression of junction proteins that participate in BTB formation after exposure to several toxicants has been demonstrated (47,59,64). Therefore, we decided to analyze occludin, claudin11, and ZO-1 as representative tight junction proteins and connexin43, 46, and 50 as representative gap junction proteins of the BTB. In this regard, we observed that G or R treatments did not modify the expression of these junction proteins, suggesting that the increase in BTB permeability may not be attributed to a reduction in their expression. Delocalization of claudin11 after herbicide treatment in vitro has been previously observed and would explain, at least in part, the effects of G and R on the integrity of the BTB (20). Bearing in mind that Sertoli cell apoptosis, TBARS and intratesticular testosterone levels, and androgen receptor and junction protein expression remained unchanged among experimental groups, additional experiments will be necessary to determine how G and R alter BTB permeability. In order to determine whether herbicide exposure in juvenile animals has any effect on adult reproductive function, a group of animals treated with 50 mg/kg/day of R from PND 14 to PND 30 was evaluated on PND 90. Testicular architecture had apparently recovered by PND 90, and no alteration in the frequency of the stages of the cycle of the seminiferous tubule epithelium was observed between groups. When evaluating BTB function, an increase in BTB permeability was observed in treated animals; however, the DSP and testicular weight were not modified. Finally, as a reduction in sperm motility after direct G or R treatment had been demonstrated (65)(66)(67)(68), we decided to analyze epididymal sperm parameters. In the study presented herein, R did not change epididymal sperm morphology or motility. Altogether, these results suggest that the alterations observed after treatment during the juvenile period turn out to be reversible and do not modify adult sperm production, morphology, or motility. The use of glyphosate as an herbicide continues to expand, and the doses declared as safe must be continually re-evaluated. Although the EPA establishes that a dose of 1 mg/kg/day is safe and harmless, our results call for caution about this statement. The results presented herein suggest that toxic effects may appear even at doses declared safe by the EPA. In addition, even though the effects of exposure at juvenile stages seem to be reversible, more studies will be necessary to determine what happens to BTB permeability if exposure continues into adulthood. On the other hand, the reversible nature of the effects may make it possible, if these effects take place in humans, to apply strategies for the exposed populations: early detection and removal of exposure sources and/or migration of exposed inhabitants from contaminated areas. In conclusion, the results presented herein show that continuous exposure to low doses of G or R alters BTB permeability in juvenile rats. Considering that DSP in adult animals, which had been unexposed to the herbicide for a prolonged period, is indistinguishable from that in control animals, it is feasible to think that the BTB impairment is a reversible phenomenon. These results warrant further investigation of glyphosate-mediated reproductive damage in human juvenile populations exposed to low doses of G or R.
Such analysis may include the determination of follicle stimulating hormone (FSH), luteinizing hormone (LH), anti-Müllerian hormone (AMH), inhibin B, and testosterone serum levels to gain insight into Sertoli cell maturation in juvenile people, as well as semen analysis in the adult population. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. ETHICS STATEMENT The animal study was reviewed and approved by the Comité Institucional de Cuidado y Uso de Animales de Laboratorio (CICUAL) of the Hospital de Niños Ricardo Gutiérrez (Res #2018-002). ACKNOWLEDGMENTS We express gratitude to Dr. Maria Gabriela Ballerini and Dr. Maria Gabriela Ropelato for helping us with the testosterone assay. The technical help of Evelin Barrios, Mariana Cruz, and Mercedes Astarloa is gratefully acknowledged. SUPPLEMENTARY MATERIAL The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fendo.2021.615678/full#supplementary-material Supplementary Figure 1 | Effect of R treatment on testicular apoptosis. Animals (n=3/group) were treated with 50 mg/kg/day of R from PND 14 to 30. At PND 31, animals were euthanized, and testes were removed. Testis sections (3-5 μm) were used for TUNEL analysis. Representative photomicrographs of the TUNEL assay are shown. Arrowheads indicate TUNEL-positive cells (green). Cell nuclei were stained with Hoechst (blue). Scale bar, 10 μm.
2021-03-11T14:10:02.877Z
2021-03-11T00:00:00.000
{ "year": 2021, "sha1": "4183ebbc654e754cf0b99e6c3b0baf8a5100c167", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2021.615678/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4183ebbc654e754cf0b99e6c3b0baf8a5100c167", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
7257235
pes2o/s2orc
v3-fos-license
Presence of Poly(A) Tails at the 3'-Termini of Some mRNAs of a Double-Stranded RNA Virus, Southern Rice Black-Streaked Dwarf Virus Southern rice black-streaked dwarf virus (SRBSDV), a new member of the genus Fijivirus, is a double-stranded RNA virus known to lack poly(A) tails. We now show that some SRBSDV mRNAs are indeed polyadenylated at the 3' terminus in plant hosts, and we investigated the nature of these 3' poly(A) tails. The non-abundant presence of SRBSDV mRNAs bearing polyadenylate tails suggested that these viral RNAs are subjected to polyadenylation-stimulated degradation. The discovery of poly(A) tails in different families of viruses implies a potentially wide occurrence of polyadenylation-assisted RNA degradation in viruses. Introduction The RNAs of many eukaryotic viruses, ranging from DNA to RNA viruses, have 3' poly(A) tails [1], which are synthesized not only posttranscriptionally but also by direct transcription from a poly(U) stretch in the template strand [2][3][4][5]. Regardless of the synthesis mechanism used, viral poly(A) tails have been considered to play crucial roles in RNA stability and translation, resembling the roles of the stable poly(A) tails of eukaryotic mRNAs [6,7]. Only recently was a function of poly(A) tails in destabilizing viral RNA revealed. Viral mRNAs containing poly(A) or poly(A)-rich tails were detected in HeLa cells infected with Vaccinia virus (a double-stranded [ds] DNA virus) [8]. Furthermore, polyadenylate tails were also found in Tobacco mosaic virus (TMV), Cucumber mosaic virus (CMV), Odontoglossum ring-spot virus (ORSV), Cucumber green mottle mosaic virus (CGMMV), Tobacco rattle virus (TRV), Turnip crinkle virus (TCV) and Tobacco necrosis virus (TNV) [9], seven positive-strand RNA viruses known to lack poly(A) tails, whose 3'-termini instead end in a tRNA-like structure (TLS) or a non-TLS heteropolymeric sequence [6]. The presence of poly(A) tails suggests that these viral RNAs are subjected to poly(A)-stimulated degradation. In this paper, poly(A) and poly(A)-rich tails are reported for the first time at the 3'-termini of the mRNAs of a dsRNA virus, Southern rice black-streaked dwarf virus (SRBSDV), generally recognized to lack poly(A) tails. SRBSDV has been proposed as a new member of the genus Fijivirus in the family Reoviridae [10] and has caused a serious rice disease in South China and Vietnam in recent years [11,12]. SRBSDV is most closely related to, but distinct from, Rice black-streaked dwarf virus (RBSDV), which is also a member of the Fijivirus genus [10,13]. The SRBSDV genome contains 10 segments, named S1-S10 in descending order of molecular weight. Comparison of the 10 genomic segments of SRBSDV with their counterparts in RBSDV suggests that SRBSDV encodes 13 open reading frames (ORFs) and possesses 6 putative structural proteins (P1, P2, P3, P4, P8, and P10) and 7 putative nonstructural proteins (P5-1, P5-2, P6, P7-1, P7-2, P9-1 and P9-2) [13]. At present, the functions of only some of these genes have been studied. P6, encoded by S6, has been identified as an RNA silencing suppressor [14]. P7-1 induces the formation of tubules as vehicles for the rapid spread of virions through the basal lamina from the midgut epithelium in its vector, the white-backed planthopper [15]. P9-1 is essential for viroplasm formation and viral replication in non-host insect cells and vector insects [16]. However, no reports are available to date assigning functions to the proteins encoded by the other ORFs.
The putative functions of these proteins can only be postulated based on their RBSDV homologs. P1, P2, P3 and P4 are the putative RNA-dependent RNA polymerase (RdRp), core protein, capping enzyme and outer-shell B-spike protein, respectively [13,17]. P8 and P10 are putative core and major outer capsid proteins, respectively [13,18]. SRBSDV mRNAs were considered to lack poly(A) tails at their 3'-ends. However, in previous experiments, all 13 ORFs of the 10 RNA segments could be amplified via RT-PCR using cDNA primed with oligo(dT)18 as templates [19], suggesting that each SRBSDV mRNA might bear a potential poly(A) tail at the 3' terminus. In this paper, we confirmed that some SRBSDV mRNAs were indeed polyadenylated at the 3' terminus in plant hosts. Virus and RNA Extraction The SRBSDV isolate used in the experiment was obtained from rice and maize plants showing typical dwarf symptoms with white waxy galls in 2014 in 8 counties of 4 provinces in China: Yunnan, Guizhou, Hunan, and Jiangxi. Total RNA from infected rice and maize leaf and stem tissue was extracted following the standard protocol of the TRIzol reagent (Invitrogen, Carlsbad, CA, USA). The isolate was identified as SRBSDV, and RBSDV was excluded, by reverse transcription PCR (RT-PCR) using specific primers that distinguish the two viruses [20]. Rapid Amplification of cDNA Ends (RACE) PCR To characterize the polyadenylate tails associated with the viral mRNAs, 3' Rapid Amplification of cDNA Ends (RACE) PCR was performed using the BD SMART™ RACE cDNA Amplification Kit (TaKaRa, Dalian, Liaoning, China). In this case, reverse transcription reactions were performed using total RNA (from infected rice and maize, respectively) as templates and an adapter-oligo(dT) primer (P1) (Table 1) to prime first-strand cDNA synthesis. Ten specific upstream primers and 10 nested primers, each corresponding to one SRBSDV mRNA, were designed according to the sequence information of the Chinese isolate HuNyy (GenBank No. JQ034348-JQ034357) (Table 1). Each upstream primer was paired with adapter primer P2 (as the downstream primer) for the 1st PCR amplification using PrimeSTAR HS DNA polymerase (TaKaRa) and cDNA as template. The products of the 1st PCR reaction were subjected to a 2nd PCR run with the nested primers and adapter primer P3 (Figure 1A). The amplified products were analyzed by 1.5% agarose gel electrophoresis, and the resulting bands, in agreement with the predicted sizes, were individually cloned into the pGEM-T Easy vector (Promega, Madison, USA) and subjected to sequence analysis. Approximately 5-10 clones from each isolate were randomly selected and sequenced. Results and Discussion After 3' RACE, the 3'-terminal sequences of the viral mRNAs were obtained, and the results indicated that SRBSDV mRNAs indeed possessed poly(A) or poly(A)-rich tails in plant hosts. Taking S10-mRNA as an example to analyze the nature of the poly(A) and poly(A)-rich tails, a total of 42 polyadenylated viral mRNA molecules were cloned from rice and maize plants. In addition to 10 mRNAs bearing poly(A) tails exclusively comprised of adenosines, a large number of mRNAs possessed poly(A)-rich tails (Figure 1B). Notably, the heterogeneity of these poly(A)-rich tails was confined to their 5' ends, and they all terminated in homogeneous adenosines (17-23 nt) (Figure 1B), which was possibly due to the 3' bias of oligo(dT)-dependent reverse transcription.
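The tail classification described above lends itself to a simple computational check. The following is a minimal sketch, not the authors' pipeline, of how cloned 3'-end sequences could be sorted into poly(A) and poly(A)-rich classes; the 17-nt window mirrors the shortest homogeneous adenosine run reported above, while the 75% threshold and the example sequence are illustrative assumptions.

```python
def classify_tail(seq: str, window: int = 17, rich_frac: float = 0.75) -> str:
    """Classify the 3' end of a cloned sequence (given 5'->3', DNA alphabet)."""
    tail = seq[-window:].upper()
    if len(tail) < window:
        return "too short"
    frac_a = tail.count("A") / len(tail)
    if frac_a == 1.0:
        return "poly(A)"        # tail comprised exclusively of adenosines
    if frac_a >= rich_frac:
        return "poly(A)-rich"   # heterogeneous 5' part, A-dominated overall
    return "no tail"

# Hypothetical clone: some 3' UTR sequence followed by 20 adenosines.
print(classify_tail("GGATCCTTGAC" + "A" * 20))  # -> poly(A)
```

In practice such a classifier would be run over every sequenced clone per segment, giving counts comparable to the 42 polyadenylated S10-mRNA molecules reported above.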
Most poly(A)-rich tails were not located downstream of the entire 3' untranslated region (UTR) of S10-mRNA but instead replaced part of the 3' UTR sequence. For example, the tail of isolate LX-1 replaced the 3' UTR sequence of S10-mRNA from nucleotide 1753 (Figure 1B). In some poly(A)-rich tails (isolates JH-1, LX-1, PT-1, PT-5, YJ-1 and YJ-4), additional non-viral nucleotides (35-208 nt) preceded the polyadenylates, which were considered to originate from the host plants. In order to further verify the presence of poly(A) tails and to exclude non-specificity of the reverse transcription reaction, these non-viral nucleotides were used to design downstream primers (e.g., S10-NVP) for PCR with an upstream primer from S10 (Figure 1A), and the amplification was positive (data not shown), sufficiently indicating the existence of mRNAs bearing polyadenylate tails. Moreover, poly(A) or poly(A)-rich tails were also discovered at the 3'-ends of the viral S1-S9 mRNAs (Figure 2). All amplified products based on 3' RACE were weak (data not shown), implying that only a small fraction of SRBSDV mRNAs was polyadenylated. (Fragment of the Figure 1 legend: primers are listed in Table 1, and the gray, black and red boxes indicate, respectively, the partial ORF, 3' UTR and non-viral nucleotides in S10-mRNA.) To our knowledge, dsRNA viruses lack poly(A) tails at the 3'-ends of their genome segments and mRNAs. Interestingly, in this paper, we demonstrated that some viral mRNA molecules are polyadenylated at their 3'-termini in plant cells infected with SRBSDV (a dsRNA virus). Besides their crucial roles in mRNA stability and translation efficiency, polyadenylate tails have recently been described as involved in viral RNA degradation [8]. Poly(A)-stimulated RNA degradation occurs throughout prokaryotic and eukaryotic cells [21][22][23][24][25][26]. Generally, the degradation process comprises three sequential steps: endonucleolytic cleavage, addition of polyadenylate tails to the cleavage products, and exonucleolytic degradation [21,26,27]. The transient poly(A) or poly(A)-rich stretches can act as landing sites to recruit 3'-5' exoribonucleases for further degradation [21,22,26,27], which might be one of the ancestral roles of polyadenylation. This evolutionarily conserved mechanism has been confirmed to play critical roles in rapidly removing redundant RNAs in cells, thereby maintaining the stability of gene expression [26,28,29]. In this study, the non-abundant presence of SRBSDV mRNAs bearing polyadenylate tails was considered to represent degradation intermediates of an RNA decay pathway, rather than to convey protection to the mRNAs. Recently, a dsDNA virus, Vaccinia virus, was linked with this conserved RNA degradation mechanism, and non-abundant, fragmented viral mRNAs bearing poly(A) or poly(A)-rich tails were detected in human cells infected with this virus [8]. Such polyadenylation-stimulated RNA degradation was also found in seven positive-strand RNA viruses from distinct virus families and genera known to lack poly(A) tails [9]. The discovery of poly(A) tails in three different types of viruses (positive-strand RNA, dsDNA and dsRNA viruses) implies a potentially wide occurrence of polyadenylation-assisted RNA degradation in viruses, which might represent a yet-unknown interaction between virus and host.
2016-03-14T22:51:50.573Z
2015-03-31T00:00:00.000
{ "year": 2015, "sha1": "7a7314914602b479cfcfcc61b9ea727c0949ac11", "oa_license": "CCBY", "oa_url": "http://www.mdpi.com/1999-4915/7/4/1642/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7a7314914602b479cfcfcc61b9ea727c0949ac11", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
238355675
pes2o/s2orc
v3-fos-license
Spontaneous Regression of a Middle Ear Melanoma Objective: To describe a case of complete spontaneous regression of a middle ear melanoma. Patient: We present a case of a 68-year-old man with complaints of unilateral hearing loss and an ipsilateral facial nerve paresis. Radiological and histopathological examination revealed a cT4bN0M0 mucosal melanoma of the middle ear. Interventions: The patient underwent a subtotal petrosectomy and postoperative radiotherapy. Main Outcome Measure: Computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography/computed tomography with 2-[fluorine-18]-fluoro-2-deoxy-D-glucose (FDG-PET-CT), and histopathological examination. Results: After subtotal petrosectomy, histopathological examination of the resection specimen showed only fibrosis and a histiocytic and clonal T-cell infiltration, but no residual melanoma at the primary tumor site, consistent with spontaneous tumor regression. Follow-up MRI scanning 6 and 12 months after radiotherapy showed no signs of tumor recurrence. Conclusions: This case describes the concept of spontaneous regression of a mucosal melanoma of the middle ear. Spontaneous tumor regression at this location has not been described before. CLINICAL CASE A 68-year-old male patient was referred to the department of otorhinolaryngology because of left-sided, progressive hearing loss. He was suspected of having an otitis media with effusion. However, routine clinical examination after 3 months showed an atypical granulomatous presentation of the tympanic membrane and therefore a biopsy was taken. Histopathological examination showed a highly atypical and mitotically active stromal melanocytic proliferation, positive for S100, Melan-A, and MITF ( Fig. 1A-D), consistent with the diagnosis of a mucosal melanoma. Subsequently the patient was referred to our institute. One week after biopsy, the patient visited our outpatient clinic. At that moment the unilateral hearing loss had increased and an ipsilateral facial nerve paresis (House-Brackmann [HB] V/VI) was almost in remission. The patient had no complaints of otalgia or otorrhea and there was no history of an ear trauma. He had a history of hypertension, atrial fibrillation, coronary artery disease, depression, and obstructive sleep apnea syndrome. There was no family history of melanoma. At physical examination, inspection of the facial musculature showed a mild degree of asymmetry of lip closure (HB II/VI). Otomicroscopy showed a mass at the level of a seemingly thickened (or absent) tympanic membrane with blue discoloration at the posterior upper and lower quadrant post-biopsy. This mass was non-pulsatile and the usual anatomical landmarks such as the annulus fibrosus tympanicus and the tympanic membrane could not be identified. The Rinne test using a 512-Hz tuning fork was negative on the left side and the Weber test was lateralized to the left ear. Further physical examination including nasal endoscopy and palpation of the neck showed no abnormalities. The audiometry examination showed a left-sided mixed hearing loss of 92 dB HL with maximum speech recognition of 88% at 110 dB SPL. Computed tomography (CT) and magnetic resonance imaging (MRI) revealed soft tissue invasion of the left mastoid bone and the middle ear cavity, completely embedding the ossicular chain (Fig. 2). Dehiscence of the bony facial nerve canal, tegmen tympani, and middle cranial fossa plate with contrast enhancement of soft tissue protruding through these defects, was observed. 
An ultrasound of the neck was performed and revealed an enlarged left-sided neck node. Fine needle aspiration cytology showed reactive lymphoid cells without melanoma. Positron emission tomography/computed tomography with 2-[fluorine-18]-fluoro-2-deoxy-D-glucose (FDG-PET-CT) demonstrated intense FDG uptake at the left middle ear cavity, indicative of a malignant tumor based on the histopathological diagnosis (Fig. 3A). Only moderate FDG uptake was seen at the level of the external auditory canal and the mastoid bone. There was no evidence of regional or distant disease. The tumor was staged according to the 8th TNM Classification for Mucosal Melanoma of the Head and Neck as a cT4bN0M0 malignant mucosal melanoma originating from the left middle ear cavity (1). One month after presentation at our outpatient clinic, a subtotal petrosectomy was performed. During surgery, suspicious brown-colored granulation tissue was obtained from the middle ear cavity and mastoid bone. Furthermore, several defects of the tegmen tympani were present at the level of the middle and posterior cranial fossa. The granulation tissue was removed from the mesotympanum, hypotympanum, sinus tympani, oval window, antrum, mastoid cavity, and perilabyrinthine cell tracts. Microscopically radical resection of the aberrant tissue was not possible due to its extensive spread. The middle ear cleft was obliterated with abdominal fat. Histopathologically, the resection specimen was completely blocked and numerous additional levels were made, but no residual melanoma cells could be found. However, a moderately dense CD3+, partly CD4+, partly CD8+ T-cell infiltrate and a CD68+ histiocytic infiltrate were observed within a large area of fibrosis and dilated blood vessels. Additional melanocytic stainings (Melan-A, S100, and SOX10) were all negative (Fig. 4). The area of fibrosis was consistent with the previous biopsy site. These histopathological findings are in accordance with spontaneous regression of the mucosal melanoma (2). Furthermore, additional T-cell antigen receptor (TCRγ and TCRβ) clonality assays (PCR-based according to EuroClonality, BIOMED) showed a highly clonal T-cell response. To rule out an accidental mix-up of samples, microsatellite examination was performed on the biopsy and the resection specimen, confirming that both specimens were from the same patient. One month after surgery, the facial nerve paresis was in complete remission (HB I/VI), and an FDG-PET-CT follow-up scan revealed minimal soft tissue induration at the petrosectomy site and mild FDG uptake with no evidence of residual tumor (Fig. 3B). Since microscopically radical resection was not possible, postoperative radiation therapy was discussed with the patient. The patient received postoperative local radiation therapy, 66 Gray in 33 fractions. Follow-up MRI scanning 6 and 12 months after radiotherapy showed no signs of tumor recurrence or of regional or distant disease. Unfortunately, the patient died 13 months after treatment from an acute myocardial infarction. DISCUSSION Malignant melanoma develops through the neoplastic transformation of pigment-producing cells, usually located in the epidermis and dermis (2). Accumulation of mutations occurs in genes responsible for cell proliferation and apoptosis. Melanocytes are derived from neural crest cells and, therefore, melanomas can develop in various places in the body (3,4).
Although melanomas are usually cutaneous in nature, they can also occur at various extracutaneous sites, including ocular, mucosal, and leptomeningeal locations (4-6). Mucosal melanomas are rare and represent 1.4% of all melanomas (7,8). Mucosal melanomas arise from the mucosal epithelium anywhere in the respiratory, gastrointestinal and genitourinary tracts (7). Head and neck mucosal melanomas are predominantly found in the sinonasal region and oral cavity (6,7). This is usually an aggressive form of cancer without symptoms in the early stages because of these 'hidden' sites, resulting in late diagnosis and poor prognosis (6). The main function of melanocytes is pigmentation and UV protection of the skin and the eyes. However, melanocytes are also present at sun-protected mucosal areas of the body, where they are also believed to play a role in the immune system because of their phagocytic, antigen-presenting, and cytokine-producing properties (6). Primary mucosal melanomas of the head and neck are uncommon and, because of their rarity, knowledge of their pathogenesis and prognosis is scarce (7). In the present case, no residual melanoma cells were found in the petrosectomy specimen. Regression of malignant melanoma is histologically characterized by the disappearance of neoplastic melanocytes, which can occur spontaneously or in response to treatment (2). (Fig. 4 legend: fibrosis, dilated blood vessels (v) and a lymphohistiocytic cell infiltration (#); in the right upper quadrant (hematoxylin-eosin [HE] staining) residual middle ear epithelium is seen (arrows); S100 stains histiocytic and dendritic-type cells in brown, with no residual melanoma cells; Melan A is negative (brown background pigments are iron deposits); CD3 shows the T-cell infiltration, which is partly CD8-positive (brown dots); Perls (Fe) staining shows the iron deposits in blue, corresponding to the intraoperatively described brown-colored tissue.) Histological regression of a primary melanoma can be partial, segmental, or complete and is more common in melanomas compared with other types of tumors (4,5). It is estimated that about 50% of non-metastatic melanomas (mucosal melanoma included) partly regress spontaneously (2). Spontaneous regression in metastatic melanomas is rare, and the pathophysiology of regression of mucosal melanoma is not completely understood. The majority of spontaneous regression studies in mucosal melanoma refer to the cutaneous type and suggest that it may be caused by an interaction between melanoma cells and the host immune system, resulting in killing of melanoma cells, proliferating lymphocytes, telangiectasia, and replacement of tumor tissue by fibrosis (2,5,9,10). Various clinical signs of spontaneous regression have been described, such as hypopigmentation, telangiectasia, size reduction, and scarring (2,5). The intraoperatively described suspicious 'brown-colored granulation tissue' corresponded histopathologically to fibrotic scar tissue with inflammation, a macrophage clean-up response, and iron deposition explaining the discoloration. Besides immunological and inflammatory factors, blood transfusion, pregnancy, and endocrinological conditions such as diabetes mellitus have been associated with spontaneous regression in the literature (2,5,11). Our patient had none of these conditions. It has also been suggested that infection and surgery elicit an increased host immune response directed against specific melanocyte-associated antigens such as Melan-A (5,11,12).
Furthermore, surgical intervention may lead to an interruption of the blood supply, resulting in disintegration of the tumor (11). In this context, one could hypothesize that the previous biopsy in our patient elicited an immune reaction. Spontaneous regression is thought to result from an effective immune response with CD4+ and CD8+ T-cells directed against melanoma cells (2). These infiltrating T-cells have a clonal T-cell receptor profile. In our case, a CD4+, CD8+ T-cell and CD68+ histiocytic infiltration was present within a large area of fibrosis in the petrosectomy specimen. These histopathological findings were present in an area consistent with the previous biopsy site, supporting spontaneous regression of the mucosal melanoma. Additional T-cell antigen receptor clonality assays (EuroClonality/BIOMED-2 Ig/TCR) performed on tissue from this area showed a highly clonal T-cell response, further supporting the phenomenon of regression (13). Few cases of primary middle ear mucosal melanoma (n = 12) have been reported in the literature (9,10,14). Oliveira et al. (9) described a case of spontaneous regression of an oral mucosal melanoma in a patient who refused treatment, with no signs of tumor recurrence after 6 years. Patients from the other reports underwent multimodality treatment, such as surgery followed by radiotherapy and/or systemic therapy. Middle ear melanoma was accompanied by poor outcomes, with 20% local recurrence, 50% distant disease, and a 70% mortality rate (10,14). Spontaneous tumor regression at this location has not been described before.
2021-10-06T06:17:00.142Z
2021-09-30T00:00:00.000
{ "year": 2021, "sha1": "801556ca0a6180f058b451bbf36bb6a9ef3c68db", "oa_license": "CCBY", "oa_url": "https://journals.lww.com/otology-neurotology/Fulltext/2021/12000/Spontaneous_Regression_of_a_Middle_Ear_Melanoma.41.aspx", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "b54e0057fb7bf8bcbb288901f60562ed021c3daa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
225083497
pes2o/s2orc
v3-fos-license
Expression Quantitative Trait Locus Mapping in Pulmonary Arterial Hypertension Expression quantitative trait loci (eQTL) can provide a link between disease susceptibility variants discovered by genetic association studies and biology. To date, eQTL mapping studies have been primarily conducted in healthy individuals from population-based cohorts. Genetic effects have been known to be context-specific and to vary with changing environmental stimuli. We conducted a transcriptome- and genome-wide eQTL mapping study in a cohort of patients with idiopathic or heritable pulmonary arterial hypertension (PAH) using RNA sequencing (RNAseq) data from whole blood. We sought confirmation from three published population-based eQTL studies, including the GTEx Project, and followed up potentially novel eQTL not observed in the general population. In total, we identified 2314 eQTL, of which 90% were cis-acting and 75% were confirmed by at least one of the published studies. While we observed a higher GWAS trait colocalization rate among confirmed eQTL, the colocalization rate of novel eQTL reported for lung-related phenotypes was twice as high as that of confirmed eQTL. Functional enrichment analysis of genes with novel eQTL in PAH highlighted immune-related processes, a suspected contributor to PAH. These potentially novel eQTL, specific to or active in PAH, could be useful in understanding genetic risk factors for other diseases that share common mechanisms with PAH. RNA sequencing and transcript abundance estimation Whole blood (3 mL) was collected in Tempus™ Blood RNA Tubes, which were stored at -80 °C until required. RNA was extracted using a Maxwell robotic system (Promega). Samples with a 260/230 ratio >1.5 and a 260/280 ratio in the range 1.9-2.1 were further quality checked by Bioanalyser, and those achieving a minimum RNA Integrity Number (RIN) of 7 were submitted for sequencing. Globin-Zero Gold rRNA Removal Kits (Illumina Inc., San Diego, CA) were used to remove ribosomal RNA contamination from whole blood RNA samples. 75 bp paired-end sequencing on a HiSeq 4000 was performed on pooled libraries of ~80 samples. Fastq files (raw reads from RNAseq) were analysed using Salmon v0.9.1 (Patro et al., 2017) and GENCODE release 28 to produce transcript abundance estimates, which were converted to gene expression data using tximport in R (Soneson et al., 2015). Salmon, the first transcriptome-wide quantifier to correct for fragment GC-content bias, was used because this correction substantially improves the accuracy of abundance estimates and the sensitivity of subsequent differential expression analysis (Patro et al., 2017). eQTL validation procedure To assess the extent to which expression quantitative trait locus (eQTL)-transcript pairs in PAH overlap with previously reported eQTL-transcript pairs described in healthy populations, we calculated the validation rate of our findings in the two largest published eQTL studies to date and the Genotype-Tissue Expression (GTEx) Project (Westra et al., 2013; Joehanes et al., 2017; Aguet et al., 2019). The validation rate was defined as the number of significant eQTL-transcript pairs in this study confirmed by the published study, divided by the total number of significant eQTL-transcript pairs that were tested by the published study, and multiplied by one hundred. We extracted all eQTL with effects below the study-specific significance threshold from the published studies' results for all significant transcripts in this study.
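For illustration, the validation-rate definition above can be written as a short function. This is a hedged sketch, not the study's code; the pair lists and identifiers below are hypothetical.

```python
def validation_rate(significant_pairs, tested_pairs, confirmed_pairs):
    """Percent of this study's significant eQTL-transcript pairs that were
    confirmed, among those actually tested by the published study."""
    tested = [p for p in significant_pairs if p in tested_pairs]
    if not tested:
        return float("nan")
    confirmed = sum(1 for p in tested if p in confirmed_pairs)
    return 100.0 * confirmed / len(tested)

# Hypothetical example: three significant pairs, two tested, one confirmed.
sig = [("rs1", "ENSG01"), ("rs2", "ENSG02"), ("rs3", "ENSG03")]
tested = {("rs1", "ENSG01"), ("rs2", "ENSG02")}
confirmed = {("rs1", "ENSG01")}
print(validation_rate(sig, tested, confirmed))  # 50.0
```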
Ensembl identifiers were used for the transcripts, and genomic coordinates (chromosome and base pair position) on the Genome Reference Consortium Human Build 37 were used for the eQTL, when matching our eQTL-transcript pairs to those published by the external studies. An eQTL-transcript pair was considered confirmed if the lead variant was in linkage disequilibrium (r² ≥ 40%) with the lead eQTL of the same transcript in the published study. For this purpose, a list of correlated variants was obtained from the European population of the 1000 Genomes Project (Durbin et al., 2010), using the R package 'proxysnps', for each lead eQTL reported by the studies used for validation. Each eQTL-transcript pair that reached the study-specific significance threshold in at least one of the studies was considered confirmed. Since the complete list of variants or transcripts that passed study-specific quality control is not usually made available by published studies, we had to make assumptions when determining the total number of our significant eQTL-transcript pairs tested in the external studies. This did not apply to GTEx results, where all tested eQTL-transcript pairs could be retrieved. All transcripts present on the expression arrays used in the other two studies were assumed to have been tested. Annotation files were downloaded from the manufacturers' websites for the complete list of transcripts present on the expression arrays. Additionally, all eQTL in this study were assumed to have been available for analysis in the studies used for validation, which used genotyping array data imputed to high-density reference panels. We restricted our analyses to common variants with a minimum minor allele frequency of 5%, which is at least as high as the thresholds of the published eQTL studies used for validation. eQTL studies used for validation We selected two of the largest published eQTL studies and the GTEx Project to compare our results to. Westra et al. (2013) meta-analysed eQTL effects from seven studies totaling 5,311 individuals to identify cis-acting effects genome-wide, as well as trans-acting effects of 4,542 variants implicated in diseases and traits listed in the GWAS Catalog (Buniello et al., 2019) at the time of their study. The other eQTL study used for validation was published by Joehanes et al. (2017), who conducted the largest single-cohort transcriptome-wide analysis to date, testing both cis- and trans-acting elements genome-wide in the whole blood samples of 5,257 individuals from the Framingham Heart Study. The GTEx Project aims to create a database of genotype and gene expression correlations in multiple human tissues of consenting donors, to aid the scientific community in understanding inherited disease susceptibility. The GTEx website provides a browser (https://gtexportal.org/home/testyourown) for retrieving results for any given variant and transcript pair, based on Ensembl identifiers, in the tissue of interest. The current release (V8) of the GTEx Project has 838 post-mortem donor samples with genetic data, of which 670 contributed to the eQTL mapping in whole blood (GTExPortal; Aguet et al., 2019). Only tissue samples that passed histological examination were accepted for the project; however, tissue exclusions were not made based on cause of death. Demographics and cause-of-death statistics can be found on the GTEx Portal (https://gtexportal.org/). Westra et al.
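The LD-based confirmation rule can likewise be sketched. The proxy table below stands in for the output of the R package 'proxysnps' (1000 Genomes EUR); the variant names and r² values are hypothetical, and the 0.40 threshold is the one stated above.

```python
R2_THRESHOLD = 0.40  # r^2 >= 40%, as defined in the validation procedure

def is_confirmed(our_lead: str, published_lead: str, proxies: dict) -> bool:
    """A pair is confirmed if our lead variant equals, or is in LD with,
    the published lead eQTL of the same transcript.
    `proxies` maps a published lead variant to {proxy_variant: r2}."""
    if our_lead == published_lead:
        return True
    return proxies.get(published_lead, {}).get(our_lead, 0.0) >= R2_THRESHOLD

# Hypothetical proxy table for one published lead eQTL.
proxies = {"rs100": {"rs101": 0.62, "rs102": 0.18}}
print(is_confirmed("rs101", "rs100", proxies))  # True  (r2 = 0.62)
print(is_confirmed("rs102", "rs100", proxies))  # False (r2 = 0.18)
```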
meta-analysed seven studies that measured gene expression in peripheral blood on one of Illumina's whole-genome Expression BeadChips (HT12v3, HT12v4 or H8v2 arrays). Since most of the cohorts used the HT12v3 array, the analyses were restricted to transcripts present on this array (Westra et al., 2013). Joehanes et al. used the Affymetrix Human Exon ST 1.0 array for the whole cohort (Joehanes et al., 2017). The complete list of transcripts was obtained from the manifest files, namely HumanHt-12_V3_0_R3_11283641_A.bgx for Illumina and HuEx-1_0-st-v2.na33.1.hg19.probeset.csv for Affymetrix, available on the manufacturers' websites. The genotype data from Westra et al. and Joehanes et al. were imputed to the largest haplotype reference panels available at the time of their analyses. Westra et al. mapped cis-eQTL differently from our approach and that of the other two studies, using a maximum distance of 250 kilobases (kb) from the probe midpoint to demarcate cis effects, while eQTLs with a distance greater than 5 Mb were defined as trans-eQTLs. The validation rate of our trans-eQTL was not assessed using Westra's trans-eQTL results, as they tested only a small set of selected variants. Also, cis-eQTL in this study were restricted to those no farther than 250 kb from the transcript's TSS when validating cis-eQTL against Westra's results. Supplementary Tables
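As an illustration of the distance conventions just described, the following sketch classifies a variant-transcript pair under Westra et al.'s definitions (cis within 250 kb of the probe midpoint, trans beyond 5 Mb, different chromosomes counted as trans); all coordinates are hypothetical.

```python
CIS_WINDOW = 250_000        # 250 kb from the probe midpoint (Westra et al.)
TRANS_DISTANCE = 5_000_000  # > 5 Mb defined as trans

def classify_eqtl(var_chrom, var_pos, probe_chrom, probe_mid):
    """Distance-based cis/trans classification; pairs between the two
    windows are left unclassified under these definitions."""
    if var_chrom != probe_chrom:
        return "trans"
    dist = abs(var_pos - probe_mid)
    if dist <= CIS_WINDOW:
        return "cis"
    if dist > TRANS_DISTANCE:
        return "trans"
    return "unclassified"

print(classify_eqtl("1", 1_100_000, "1", 1_000_000))  # cis
print(classify_eqtl("1", 9_000_000, "1", 1_000_000))  # trans
```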
2020-10-28T13:05:59.573Z
2020-10-22T00:00:00.000
{ "year": 2020, "sha1": "bbee4eed216f0d768d193681799d17ce9c212c38", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4425/11/11/1247/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ca68c300714e8f313c57247d526d47dfc034ff37", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
122078452
pes2o/s2orc
v3-fos-license
Direct Numerical Simulations of the Kraichnan Model: Scaling Exponents and Fusion Rules We present results from direct numerical simulations of the Kraichnan model for passive scalar advection by a rapidly-varying random scaling velocity field for intermediate values of the velocity scaling exponent. These results are compared with the scaling exponents predicted for this model by Kraichnan. Further, we test the recently proposed fusion rules which govern the scaling properties of multi-point correlations, and present results on the linearity of the conditional statistics of the Laplacian operator on the scalar field. As one of the simplest realisations of a model with turbulent statistics with non-trivial scaling exponents, the Kraichnan model [1] of advection by a white-in-time scaling velocity field has attracted much recent attention [2][3][4][5][6]. The model is analytically tractable, in the sense that its statistical description may be reduced to a set of closed-form differential equations for the n-order correlation functions. The model concerns the equation of motion for a passively-advected scalar field T driven by a velocity field u, ∂T/∂t + u·∇T = κ∇²T + f (1), where κ is the molecular diffusivity. The velocity field is taken to be a Gaussian, white-in-time, incompressible, homogeneous scaling random field. Statistical stationarity is achieved through the forcing f, which is also taken to be delta-correlated in time, statistically homogeneous and isotropic, and to exhibit only large-scale spatial components. The parameter of interest in this model is the scaling exponent ζ_h characterizing the so-called eddy diffusivity tensor h^{ij}(R), which contains the relevant information about the random velocity field u(r,t) through the white-in-time correlation of velocity differences, ⟨[u^i(r+R,t) − u^i(r,t)][u^j(r+R,t') − u^j(r,t')]⟩ = h^{ij}(R) δ(t−t') (2); the notation ⟨···⟩ refers to ensemble averaging. Under the conditions that the velocity field exhibits fast temporal decorrelation, scaling and incompressibility, h^{ij}(R) takes the d-dimensional form [1] h^{ij}(R) = h(R)[(d−1+ζ_h)δ^{ij} − ζ_h R^iR^j/R²], with h(R) ∝ (R/L)^{ζ_h} (3). In the last equation the scaling of h(R) is expressed normalized with respect to L, the outer scale of the velocity field. For physically realisable fields ζ_h may vary between 0 and 2. Our aim is to express the statistical properties of the scalar field in terms of the parameter ζ_h. The statistics is characterized by the n-point correlators, defined as F_n(r₁, r₂, ..., r_n) ≡ ⟨T(r₁)T(r₂)···T(r_n)⟩ (4). One expects the correlators to be homogeneous functions of their arguments, F_n(λr₁, λr₂, .., λr_n) ∼ λ^{ζ_n} F_n(r₁, r₂, .., r_n), and one hopes to determine the dependence of the scaling exponents ζ_n on ζ_h. In this model, the rapid temporal decorrelation of the velocity allows one to derive a set of closed equations for these correlation functions [1], in which F_{2n} is a function of the 2n variables r₁, r₂, ..., r_{2n} and F_{2n−2} is a function of the 2n−2 variables r₁, r₂, ..., r_{2n} except for r_α and r_β. Φ₀ is the forcing correlation and may be eliminated using the two-point equation. Only the 2n-th order moments are considered, as by isotropy the odd moments vanish. For n = 1 these equations are readily solvable, leading to the exact result ζ₂ = 2 − ζ_h (7). For n ≥ 2 the equations are difficult to solve analytically for arbitrary values of ζ_h, and to date only certain limits have been treated. The limit of κ → 0, ζ_h → 0 (in that order) has been examined perturbatively in [4].
This limit is not realisable in direct numerical simulations due to numerical instabilities caused by small diffusivities; moreover, fields with scaling exponents approaching zero become increasingly spatially rough and are very difficult to produce and treat reliably numerically. In [5] the perturbative small parameter was ζ_h/d, with d the spatial dimension, which requires either the difficult ζ_h → 0 limit or the numerically inaccessible case of large dimension. The regime of ζ_h → 2 has also been treated perturbatively in [7]. The only theory which treats the intermediate span of physical fields requires a closure that is not rigorous [3,6], and it is with the prediction arising from this theory that we will be able to make a comparison. Further, we test in detail some of the more general scaling predictions afforded by the fusion rules for fluid dynamics developed in [8], and the particular statistical assumptions with respect to conditional statistics utilised in the theory of [3,6] in obtaining predictions for the scaling exponents. The crucial assumption arises in the context of the equation for the nth order structure functions, defined as S_n(R) ≡ ⟨(T(x+R) − T(x))^n⟩, which in the inertial range takes the form (1/R^{d−1}) d/dR [R^{d−1} h(R) dS_{2n}(R)/dR] = J_{2n}(R) (8). The function J_{2n}(R) derives from the dissipative term; writing δ_R T(x) ≡ T(x+R) − T(x), one may determine directly that J₂(R) = 4ε̄, the mean dissipation (independent of R). In order to obtain the scaling exponents ζ_n of the nth order structure functions, one needs to evaluate J_{2n}(R). In light of (4) and the exact result (7), one sees that J_{2n} must have a scaling form which agrees with J_{2n}(R) = C_{2n} n J₂ S_{2n}(R)/S₂(R) (10). This result can be derived without reference to (8) using the fusion rules derived in [6]. Either way, the coefficients C_{2n} are undetermined. Kraichnan proposed that C_{2n} = 1 for all n. In this case one obtains from (8) a quadratic equation determining the ζ_n's, ζ_{2n}² + (d − ζ₂)ζ_{2n} − ndζ₂ = 0, whose positive root is ζ_{2n} = ½[√(4ndζ₂ + (d − ζ₂)²) − (d − ζ₂)] (11). As has been pointed out in [3], this assumption bears a strong relation to the conditional statistics of the Laplacian of the field. One may rewrite J_{2n}(R) in terms of the average of the Laplacian conditioned on the value of a difference of T across the length scale R, δ_R T(x). One way to ensure that J_{2n}(R) has the scaling (10) is for the conditional average to satisfy κ⟨∇²T(x) | δ_R T(x)⟩ ∝ J₂ δ_R T(x)/S₂(R) (13). Hence a linear behaviour of the conditional average of the Laplacian is intimately connected with the determination of the scaling exponents. The model has been studied by direct numerical simulations in [9] with ζ_h = 1. These simulations have been criticised for the method of generation of the velocity field: two fixed scaling fields were swept past each other in orthogonal directions at a constant rate. In doing so one may lose isotropy in a way that can influence the apparent numerical values of the measured exponents. In our simulations we have evolved a scalar field in two dimensions on a 1024² grid. The scaling velocity field was implemented by Fourier transforming a set of k-vector coefficients, each chosen randomly from a Gaussian distribution scaled to a standard deviation proportional to k^{−1−ζ_h/2}. The direction of the kth component u_k was chosen such that k · u_k = 0. To reduce computation we have used an isotropised version of the method employed in [9]: we generate two fixed realisations and shift them with respect to one another in order to obtain rapid variation. At each time step the two fields are independently shifted by a step of random size and direction.
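To make the velocity-field construction concrete, here is a minimal sketch under stated assumptions (a periodic 2D box, NumPy FFT conventions, and a single real projection of the complex spectrum); it is not the authors' code, only an illustration of Gaussian Fourier amplitudes with standard deviation ∝ k^{−1−ζ_h/2} and the transverse choice k·u_k = 0.

```python
import numpy as np

def scaling_velocity(n=256, zeta_h=1.0, seed=0):
    rng = np.random.default_rng(seed)
    k1d = np.fft.fftfreq(n) * n                      # integer wavenumbers
    KX, KY = np.meshgrid(k1d, k1d, indexing="ij")
    k = np.hypot(KX, KY)
    k[0, 0] = 1.0                                    # avoid division by zero
    amp = k ** (-1.0 - zeta_h / 2.0)                 # std ~ k^(-1 - zeta_h/2)
    amp[0, 0] = 0.0                                  # no mean flow
    coef = amp * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    ux_hat = coef * (-KY) / k                        # u_k // (-k_y, k_x), so k.u_k = 0
    uy_hat = coef * (KX) / k
    # Taking the real part of a non-Hermitian spectrum is a cheap way to get
    # a real Gaussian field; it only rescales the overall variance.
    return np.fft.ifft2(ux_hat).real, np.fft.ifft2(uy_hat).real

ux, uy = scaling_velocity()
```

Shifting two such fixed realisations against each other by random steps, as described above, then mimics the rapid temporal decorrelation.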
The fields are renewed after around every 500 time steps to reduce any temporal correlation that this method might induce. We checked that the results are insensitive to a more frequent refreshment of these fields. The spatial discretisation is second order, and the time evolution was performed using an explicit Euler scheme. The forcing was implemented by stimulating, at every time step, one of the nine smallest wavenumbers with an amplitude chosen from a Gaussian distribution. Our initial conditions for the scalar field (for a given value of ζ_h) were Gaussian random, with the 2nd-order scaling exponent distinct from the expected result of 2 − ζ_h, and truncated in k-space. Typically, saturation to a statistical steady state required about thirty million time steps on the CRAY J90. We have converged results for three values of ζ_h, i.e. 0.6, 1.0 and 1.2. The diffusivity in every run was chosen to obtain the longest possible inertial range while retaining stability in the small scales. In Fig. 1 we present a typical realization of the scalar field for ζ_h = 1.0. It shows significant development of small-scale structures. In Fig. 2 we present the structure functions S_n(R) as a function of R for the three values of ζ_h, computed using spatial averaging over single realizations after statistical stationarity was reached, and then time averaging over one hundred snapshots taken at intervals of ten thousand time steps. This figure shows that we have one and a half decades of scaling, or "inertial range". Fig. 3 displays the dependence of ζ_n on n for the three values of ζ_h. Also shown is the prediction of Kraichnan for these values. It is evident that for the three parameter values tested we have close agreement. In the figures we also display the odd-order exponents. These were calculated from the field by taking absolute values; strictly this is not covered by the theory, but one sees here that they smoothly interpolate the law for the even orders. We remark that although the grid is relatively small, the structure functions display well-developed scaling ranges for orders as high as 12. The relatively good statistics resulted from averaging over many snapshots. We checked, however, that the single-time realisations also appear to be well self-averaged. Note that for ζ_h = 1.0 the agreement between the numerically computed value of ζ₂ and Eq. (7) is best. We believe that the reason for this is simply the difficulty of creating a velocity field with precise scaling on a finite grid. It is interesting that the scaling in the passive scalar field in fact appears cleaner than that which can be obtained by the Fourier transform method described above on grids of this size. If we check our apparent real-space scaling exponents for the velocity field, we find that the minimum error between the input ζ_h in k-space and the observed one occurs precisely at ζ_h = 1. However, the higher-order scaling exponents do not seem to be as sensitive to this discrepancy. The quality of the prediction (11) can be independently tested by verifying that the coefficients C_n are close to unity, and that the conditional average (13) is indeed linear with the right R-dependent prefactor. To this end we computed from the simulation the quantities J_n(R) of Eq. (9). J₂ was confirmed to be constant throughout the inertial range. In Fig. 4 we present J_n(R) as a function of nJ₂S_n(R)/2S₂(R) for n = 2, 4, 6, 8, 10 and inertial-range R.
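The exponent extraction described above can be illustrated as follows. This sketch, not the authors' code, estimates S_n(R) along one axis of a periodic snapshot, fits a log-log slope over an assumed scaling range, and evaluates Kraichnan's prediction (11) with ζ₂ = 2 − ζ_h; the random field here is only a stand-in for a converged simulation snapshot.

```python
import numpy as np

def structure_function(T, n, seps):
    """S_n(R) = <|T(x+R) - T(x)|^n>, sampled along axis 0 of a periodic grid."""
    return np.array([np.mean(np.abs(np.roll(T, -r, axis=0) - T) ** n)
                     for r in seps])

def fitted_zeta(T, n, seps):
    S = structure_function(T, n, seps)
    slope, _ = np.polyfit(np.log(seps), np.log(S), 1)   # log-log slope
    return slope

def kraichnan_zeta(n, zeta_h, d=2):
    """Kraichnan's prediction, Eq. (11), for even order n = 2m."""
    zeta2 = 2.0 - zeta_h        # exact two-point result, Eq. (7)
    m = n / 2.0
    return 0.5 * (np.sqrt(4 * m * d * zeta2 + (d - zeta2) ** 2) - (d - zeta2))

T = np.random.rand(1024, 1024)   # stand-in for a converged scalar snapshot
seps = np.arange(4, 64)          # separations inside the assumed scaling range
print(fitted_zeta(T, 2, seps), kraichnan_zeta(2, zeta_h=1.0))  # latter gives 1.0
```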
The dashed line is the line y = x, and we see that it passes through the data without any adjustable parameter. The coefficients C_n were obtained from the data for a range of values of R and n, and were found to be very close to unity in the inertial range; see the inset in Fig. 4. Finally, we can check the postulated linearity of the conditional average (13). These quantities were calculated for a range of R values in the inertial range by averaging over several directions of R. The results are displayed in Fig. 5. Our conclusions from these simulations are that the postulates that lead to the prediction (11) for the scaling exponents (i.e., linear conditional averages and C_{2n} = 1) are very well supported by the numerical data. As a result it is no surprise that the measured scaling exponents agree very closely with their predicted values. Due to the limitations of the computational techniques one cannot, of course, state that precise agreement is observed. It is our conviction, however, that the conditional average is very close to being linear; a persistent failure to prove the linearity mathematically may indicate that this property is not exact. It seems, however, very worthwhile to probe this question further to understand the close agreement between simulations and (11).
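A conditional average of the kind tested in Fig. 5 can be estimated by binning, as in the following sketch (assumptions: periodic grid, unit grid spacing, a standard five-point Laplacian stencil; this is not the authors' method, only one plausible estimator).

```python
import numpy as np

def conditional_laplacian(T, R, nbins=31, kappa=1.0):
    """Estimate kappa * <Laplacian(T)(x) | delta_R T(x)> by binning the
    increment delta_R T; linearity means the result ~ const(R) * bin center."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T)  # 5-point stencil
    dT = np.roll(T, -R, axis=0) - T                        # delta_R T along axis 0
    edges = np.linspace(dT.min(), dT.max(), nbins + 1)
    idx = np.digitize(dT.ravel(), edges) - 1
    lap_flat = kappa * lap.ravel()
    cond = np.array([lap_flat[idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(nbins)])
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, cond

# Usage on a stand-in field; in practice one averages over directions of R.
centers, cond = conditional_laplacian(np.random.rand(512, 512), R=16)
```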
2019-04-19T13:11:24.014Z
1997-07-01T00:00:00.000
{ "year": 1997, "sha1": "3e3a87a0831dc592f316935d23cf5dc79347559c", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "46e387e678a50e3c5a9da8fc41d8305918dd1233", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
230609896
pes2o/s2orc
v3-fos-license
The Effects of Myrtle (Myrtus communis) and Clindamycin Topical Solution in the Treatment of Mild to Moderate Acne Vulgaris: A Comparative Split-Face Study Objectives: Acne vulgaris is a chronic skin disease whose standard treatment carries therapeutic limitations and some common adverse effects; medicinal plants, used as combination therapy, can be effective in treatment with few adverse effects. Myrtle (Myrtus communis) has beneficial properties and has been administered topically and orally for some skin diseases in Persian medicine. This study aimed to compare the efficacy and safety of a Myrtle formulation and 1% clindamycin topical solution. Methods: This was a split-face clinical trial conducted on 55 patients with mild to moderate acne vulgaris over 16 weeks. The patients applied topical Myrtle solution to the right side of the face (group 1) and clindamycin solution to the left side (group 2) twice daily for 12 weeks. All participants were examined for the acne severity index (ASI) and total acne lesion count (TLC) at set times during the study. The patients then stopped using the drugs and remained treatment-free for the final four weeks of the study. Results: Forty-eight patients completed the 16-week study; 40 (83.2%) were female and the rest were male. The mean age was 25.62 ± 7.62 years (mean ± standard deviation). After 12 weeks, the percentage changes of comedones, inflammatory lesions, ASI and TLC were significantly reduced in both groups (p < 0.001). The percentage decreases in inflammatory lesions and ASI were significantly greater in group 1 (p = 0.03). There was no significant difference in the incidence of side effects between the two groups. The percentage decrease in sebum was significantly greater in group 1 (p = 0.003). Conclusion: Myrtle lotion was effective and safe for the treatment of mild to moderate acne vulgaris. INTRODUCTION Acne vulgaris is a chronic inflammatory disease of the sebaceous glands and one of the most prevalent skin diseases among adolescents and young people worldwide. MATERIAL AND METHODS This study was a triple-blind, non-randomized, split-face clinical trial. All the patients were recruited from three university centers in Tehran, Iran, from June 2017 to April 2019. The inclusion criteria were men and women aged 12 to 45 years with mild to moderate acne vulgaris on their faces. The required lesion characteristics were a TLC score of 20-140, 10-50 non-inflammatory lesions, 10-50 inflammatory lesions, and the absence of sinus tracts, cysts, and nodules. Patients were excluded if they had a concurrent skin disease (such as scarring, rosacea, or psoriasis) that would interfere with the assessment of acne lesions, an acute systemic disease, pregnancy, breast-feeding, use of topical acne treatment in the two months before or during the study, use of oral retinoic acid in the six months before the study, or allergy to the drugs or their compounds. The Ethics Committee of Iran University of Medical Sciences approved this study (Ethics code: IR.IUMS FMD.REC 1396.9321309010). Informed consent was obtained from each participant. The study was registered in the Iranian Registry of Clinical Trials (registration no. IRCT 20171122037581N1). Plant material and drug preparation The required amount of dried Myrtle leaves was purchased from the herbal market in Tehran, Iran.
The herbarium code PMP-447 was obtained from the botany laboratory and herbarium of the Faculty of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran. The plant leaves were ground to powder using an electric mill. The ethanolic extract of Myrtle was prepared by the maceration method using an ethanol/water solvent: 100 g of ground powder was macerated in 500 mL of EtOH:H2O (80:20) for 48 h. Sample size To estimate the sample size, data were derived from a pilot study of 10 subjects. The sample size was calculated using the formula for comparing means between matched-pairs groups, on the basis of the following assumptions: α = 0.05, power = 80%, effect size = 0.6 and an attrition rate of 20%; the sample size was calculated to be 55 patients. Study design and population Fifty-five patients who met the inclusion criteria were enrolled in this study. They received topical Myrtle solution on the right side and clindamycin 1% solution on the left side of their faces twice daily for 12 weeks. After completing active therapy, the patients were drug-free for four weeks of follow-up. During the study, the patients, the researchers and the statistical analyst were unaware of the contents of the treatments. The Myrtle and clindamycin 1% solutions were dispensed to each patient in two opaque containers; both drugs were identical in appearance and odor. Patients were visited and the treatment sites evaluated at the beginning of the study and then after 6, 12 and 16 weeks. The following skin biophysical characteristics were measured on both sides of the face at baseline and at the end of the study in week 16. Fluorescence photography was conducted using the Visiopor® PP 34 camera (Courage-Khazaka, Germany) with narrow-band UVA light (375 nm) and an image size of 6.4 × 8 mm. Photographs were taken of the left and right cheeks. The number of orange-red fluorescence spots was assessed; orange-red fluorescence indicates the presence of Propionibacterium acnes (P. acnes) within clinically non-evident lesions (follicular impactions and microcomedones) and clinically evident lesions (comedones, papules and pustules). Four skin biophysical characteristics were measured using the MPA-9 (Courage-Khazaka, Germany). The erythema index and melanin index were measured using the Mexameter MX 18. Stratum corneum hydration and sebum content were measured with the Corneometer CM 825 and Sebumeter SM 815, respectively. All measurements were performed on the left and right cheeks in a room at 20-25 °C and controlled relative humidity. Endpoints The efficacy endpoint was the comparison of inflammatory lesions, non-inflammatory lesions, TLC and ASI, and their percent changes, at baseline and week 12. Patient satisfaction was scored from 0 to 4 (0 = worse, 1 = no response, 2 = poor response, 3 = good response, 4 = excellent). Fifty percent recovery was compared on the two sides; drug safety was measured by evaluating side effects, based on the 'Common Terminology Criteria for Adverse Events v4.0, 2009'. Statistical analysis Data were analyzed using SPSS software (SPSS Inc., Version 17.0, Chicago, IL, USA). Variables were described by mean and standard deviation. The normal distribution of variables was evaluated using the Kolmogorov-Smirnov and Shapiro-Wilk tests. Friedman's test was used to determine changes in the variables during the study. The Wilcoxon matched-pairs signed-ranks test was used to compare variables between the right and left sides of the face and for pairwise comparisons on each side.
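For illustration, the stated design assumptions can be plugged into a standard normal-approximation sample-size formula. The paper does not give the exact formula used; the two-sample form below, with ceiling rounding and 20% attrition inflation, happens to reproduce the reported 55, but that correspondence is an assumption, not a statement of the authors' method.

```python
from math import ceil
from statistics import NormalDist

def sample_size(effect=0.6, alpha=0.05, power=0.80, attrition=0.20):
    """Normal-approximation sample size under the paper's stated assumptions."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for two-sided 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    n = 2 * ((z_a + z_b) / effect) ** 2         # per-group, two-sample form
    return ceil(ceil(n) / (1 - attrition))      # inflate for 20% attrition

print(sample_size())  # 55
```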
The qualitative variables were compared between the two sides of the face using McNemar's test. A p-value of less than 0.05 was considered statistically significant. Baseline characteristics A total of 83 individuals with mild to moderate acne vulgaris were interviewed in 3 university centers, of whom 55 subjects were enrolled in the study. These patients referred to one cen- At baseline, there were no significant differences among the subjects in terms of mean TLC (p = 0.28) and comedone score (p = 1), but the mean number of inflammatory lesions (p = 0.004) and the ASI (p = 0.02) were significantly higher on the right side compared to the left. Table 1 presents a comparison of the skin lesions, TLC, ASI and their changes on the two sides. Fig. 1 presents the percent changes in comedones, inflammatory lesions, TLC and ASI in week 16 compared to baseline. Efficacy on total lesion count (TLC) At the beginning of the study, the mean TLC scores on the right and left sides of the face were 34.1 ± 17.48 and 31.83 ± 13.04, respectively (Table 1). There was no significant difference between the two sides in the decrease in mean TLC in weeks 6 and 12 (p = 0.31 and p = 0.07, respectively). In week 16, increased TLC was reported on both sides of the face compared to week 12; however, the percentage changes were significantly higher on the right side (p = 0.01). On both sides there was a significant decrease in TLC in weeks 12 and 16. The percentage change in TLC showed no significant difference between the two sides in week 12 compared to baseline (p = 0.1). Efficacy on non-inflammatory lesion (comedone) scores In weeks 6 and 12 of the study, the number of comedones was significantly decreased on both sides compared to baseline (p < 0.001). In week 12, there was no significant difference in the percentage changes of comedones between the two sides compared to baseline (p = 0.64). In week 16, the mean number of comedones was higher on the left side (p = 0.01). Efficacy on inflammatory lesion scores At baseline, the mean number of inflammatory lesions was significantly higher on the right side (p = 0.004). In week 12, there was a significant decrease in the mean number of inflammatory lesions on both sides (p < 0.001), but the percentage decrease in inflammatory lesions was significantly greater on the right side than on the left (p = 0.03). Efficacy on acne severity index (ASI) The ASI was significantly higher on the right side than on the left at baseline (p = 0.02). In weeks 6 and 12, the decrease in ASI was similar on both sides (p < 0.001). Because the absolute change in ASI is difficult to interpret on its own, it is necessary to compare the percentage changes of ASI on the two sides in order to properly explain the results. After 12 weeks, there was a significant decrease in the ASI percentage change on both sides compared to baseline, but the ASI percentage decrease on the right side was significantly greater (p = 0.03). The percent changes of TLC and ASI in weeks 6 and 12 are shown in Table 2. Skin biophysical characteristics There was a significant difference in melanin levels between the two sides at baseline (p = 0.04), and their percentage changes were not significant after 16 weeks (p = 0.49).
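The split-face comparison of percentage changes can be sketched as follows; the data are fabricated stand-ins, and only the analysis steps (percent change from baseline for each side, then a Wilcoxon matched-pairs signed-rank test between sides) mirror the plan described above.

```python
import numpy as np
from scipy.stats import wilcoxon

def pct_change(baseline, week12):
    """Percent change from baseline, per subject."""
    return 100.0 * (week12 - baseline) / baseline

rng = np.random.default_rng(1)
base_r = rng.integers(20, 60, 48)               # right side: Myrtle (group 1)
base_l = rng.integers(20, 60, 48)               # left side: clindamycin (group 2)
w12_r = base_r * rng.uniform(0.3, 0.7, 48)      # fabricated week-12 counts
w12_l = base_l * rng.uniform(0.4, 0.8, 48)
stat, p = wilcoxon(pct_change(base_r, w12_r), pct_change(base_l, w12_l))
print(f"Wilcoxon matched-pairs p = {p:.3f}")
```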
At the 16th week, there was a significant increase in moisture level with clindamycin, whereas it decreased with Myrtle; nevertheless, the difference in percentage change was not significant (p = 0.07). After 16 weeks, the sebum level was significantly lower with Myrtle (p = 0.009), and the percentage decrease with Myrtle was significantly greater than with clindamycin (p = 0.003). Erythema increased on the side treated with Myrtle and decreased with clindamycin at the 16th week, and this difference was significant (p = 0.02); however, the percentage changes of erythema between the two formulations were not significant (p = 0.46). The percentage changes of P. acnes (Visiopor) decreased for both products, but the difference was not statistically significant (p = 0.8). Table 3 presents the skin biophysical characteristics of the study population.

Recovery
Recovery was defined as a 50% decrease in ASI at the 6th and 12th weeks. At the 6th week it was observed in 22 patients (45.8%) with Myrtle and 13 patients (27.1%) with clindamycin, significantly higher for Myrtle (p = 0.04); at the 12th week, the rate of recovery was not significantly different between the two sides of the face (p = 0.14).

Adverse events
Table 4 shows the number of patients experiencing any adverse events (AEs). Eighteen patients (37.6%) with Myrtle and 21 subjects (43.8%) with clindamycin reported at least one AE after six weeks. This decreased to 13 patients (26.2%) for Myrtle and 19 subjects (39.7%) for clindamycin at the 12th week. The severity of symptoms on both sides of the face was mild to moderate, and none was severe. Scabbing was the most common AE for both Myrtle and clindamycin at the 6th and 12th weeks, with no significant difference (p = 1 and p = 0.65, respectively). The second most common AEs for both products were aggregation of lesions (p = 0.73) at the 6th week and dryness (p = 0.08) at the 12th week. Overall, there was no significant difference in the incidence of side effects between the two formulations.

Patient satisfaction
The participants' satisfaction with treatment, expressed as median (interquartile range, IQR), was 3 (2-3.75) for Myrtle and 2 (2-3) for clindamycin. Satisfaction was significantly higher for Myrtle (p = 0.02).

Recurrence
For both formulations, the percent changes of TLC, ASI, and inflammatory and non-inflammatory lesions increased at the 16th week compared to the 12th week. The recurrence rate was not significantly different between them (Table 2). Figs. 2 and 3 show photographs of one subject who received Myrtle (right side of the face) and clindamycin (left side of the face) at baseline and at the end of the study (16th week).

DISCUSSION
The aim of this split-face study was to compare the efficacy and safety of Myrtle and clindamycin solutions in patients with mild to moderate facial acne vulgaris. In this study, the decreases in inflammatory and non-inflammatory lesions and in the percentage changes of ASI and TLC were significant on both sides of the face at the 12th week. There was a significantly greater reduction in inflammatory lesions and in ASI percentage changes for Myrtle compared to clindamycin. The significant decrease in sebum, a precursor of acne, was one of the most outstanding features of the Myrtle lotion. Patient satisfaction with Myrtle was significantly higher than with clindamycin, and the incidences of AEs were similar between the two formulations.
Acne vulgaris is a multifactorial skin disease, and some of its aspects remain unknown. Four key factors have been identified in its pathogenesis: increased sebum production, follicular hyperkeratinization, colonization of the pilosebaceous unit by P. acnes, and inflammation [13]. Medical treatment targets one or several of these factors to prevent lesion development by inhibiting sebum production, normalizing follicular hyperkeratinization, decreasing P. acnes colonization and inhibiting inflammation [14]. The release of free radicals and inflammatory mediators reflects an oxidative stress state in parts of the pathogenesis of acne; therefore, antioxidant and anti-inflammatory agents have been considered alongside usual medication for the treatment of acne vulgaris [15]. Increased production and subsequent retention of keratinocytes is considered a comedogenic factor [16]. An in vitro study on an ethanolic product of Myrtle leaves (Myrtacine) demonstrated anti-proliferative activity on human keratinocytes: Myrtacine inhibited keratinocyte proliferation by 27% and 76% at 1 and 3 μg/mL, respectively. The effective antiproliferative compounds of Myrtacine were myrtucommulone A and B [9].

Colonization of the pilosebaceous unit by Staphylococcus aureus, Staphylococcus epidermidis and P. acnes is another key factor in the pathogenesis of acne [17]. Myrtle has strong antimicrobial activity owing to its high content of monoterpene hydrocarbons such as α-pinene, limonene, linalool, eucalyptol and terpineol [18-20]. The bactericidal activity of the ethanolic product of Myrtle leaves (Myrtacine) and of myrtucommulone A and B against erythromycin-sensitive and erythromycin-resistant P. acnes strains was determined by measuring the MIC and D-value. The extracts inhibited the growth of the P. acnes strains with MICs of 4.9 μg/mL and 2.4 μg/mL, respectively. Myrtucommulone A and B also showed inhibitory activity against both strains (MICs of 1.2 μg/mL and about 0.5 μg/mL, respectively). The extract also exhibited concentration-dependent antilipase activity [9]. P. acnes produces lipases, proteases and hydrolases, contributing to inflammation [22]. Several studies have shown anti-inflammatory properties of the essential oil and of aqueous and ethanolic extracts of Myrtle in animal models [23]. The anti-inflammatory effect of Myrtle is also related to myrtucommulone (MC) and, to a lesser extent, semi-myrtucommulone (S-MC) and nonprenylated acylphloroglucinols. This is due to their ability to suppress the biosynthesis of eicosanoids by directly inhibiting cyclooxygenase-1 and 5-lipoxygenase in vitro and in vivo. They also restrain the synthesis of reactive oxygen species (ROS) and the release of elastase, an initiating factor of inflammation [24].

Oxidative stress is initiated by ROS, and three free radicals (hydroxyl, superoxide and nitric oxide) are held responsible for the irritation occurring during acne infection; this further supports the use of antioxidant and anti-inflammatory agents alongside usual medication in the treatment of acne vulgaris [15,25,26]. The antioxidant activity and total phenolic content of four extracts (water, methanol, ethanol and ethyl acetate) of Myrtus communis have been measured; the methanol and water extracts possess significant antioxidant activities.
This order was observed in both leaf and berry extracts. The authors showed that Myrtle is a rich source of phenolic compounds, which have been reported as active antioxidant components, and that there is a linear correlation between phenolic content and antioxidant activity [27-29].

Clinical studies have also evaluated the effect of different topical Myrtle products in patients with acne, confirming the results of the in vitro and in vivo studies. A prospective, randomized, parallel-group study was conducted on 164 patients with mild to moderate acne who had previously developed retinoid dermatitis; one group received 0.2% Myrtacine and 4% nicotinamide, and the second group was treated with a moisturizer. Patients treated with the Myrtacine/nicotinamide combination showed a statistically significant improvement in symptoms (pruritus, stinging and burning sensation) and signs (erythema, dryness and oedema). This benefit was also observed in patients with nodular acne [30]. Another clinical study examined Myrtacine-containing products against erythromycin-resistant strains of Cutibacterium acnes. Sixty patients with Global Acne Severity Evaluation (GEA) grades 2 and 3 acne were treated with a Myrtacine-based dermocosmetic twice daily for 8 weeks. At baseline, antibiotic-resistant strains of C. acnes were detected in 38 patients. The global C. acnes population counts were stable at the end of the study; however, there was a significant reduction in erythromycin-resistant strains of C. acnes, as well as a significant reduction in inflammatory and non-inflammatory lesions and in acne severity after 8 weeks [31]. In another study of 20 Korean women with acne vulgaris over 6 weeks, Myrtle essential oil was clinically shown to have beneficial effects: the acne grades significantly decreased in the Myrtle group, and the pore, erythema, sebum, microorganism and desquamation indices also decreased in a statistically significant manner. In the control group without Myrtle, the acne grades and the microorganism index decreased slightly, but not significantly, while the pore, erythema, sebum and desquamation indices increased to some extent [11].

Our results are consistent with those previous clinical trials on Myrtle products. In our study, the erythema index was not reduced, which differed from the findings of the latest study. This difference may be related to the duration of the study (12 weeks versus 6 weeks) and the type of Myrtle product (ethanolic extract versus essential oil); we believe our study duration was more appropriate. A common finding between our study and the latest study is the decrease in sebum index. The number of P. acnes spots measured by Visiopor showed a significant decrease in our study. The effectiveness of the Myrtle formula was significant in the assessment of ASI, TLC, and inflammatory and non-inflammatory lesions. No withdrawals owing to adverse effects of Myrtle were reported in our trial. The efficacy and good tolerance of the Myrtle formula demonstrated in our study are consistent with the results of previous studies. The small sample size was the main limitation of this study; further studies with larger sample sizes and higher dosages of Myrtle could be designed to investigate its effects on acne treatment.
CONCLUSION
This study showed a significant decrease in inflammatory and non-inflammatory lesions and in the percentage changes of ASI and TLC on both sides of the face at the 12th week, with a significantly greater reduction in inflammatory lesions and ASI percentage changes for Myrtle compared to clindamycin. The results show that Myrtle is as effective as clindamycin in treating mild to moderate acne. The significant decrease in sebum, a precursor of acne, was one of the most outstanding features of the Myrtle lotion. Patient satisfaction with Myrtle was significantly higher than with clindamycin, and the incidences of AEs were similar between the two formulations. Myrtle can be used as a natural drug alongside other standard topical drugs in the treatment of acne vulgaris to improve clinical efficacy and reduce possible side effects.
2020-12-31T09:12:22.592Z
2020-12-31T00:00:00.000
{ "year": 2020, "sha1": "af54fd1da45f67039da439d3936c00aeabd7d04f", "oa_license": "CCBYNC", "oa_url": "https://www.journal-jop.org/journal/download_pdf.php?doi=10.3831/KPI.2020.23.4.220", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5bdedbc01a27234ee4949bbfb950b15725a96959", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
3231992
pes2o/s2orc
v3-fos-license
Clear cell adenocarcinoma of urinary bladder: A case report and review

Clear cell carcinoma is an uncommon but distinct variant of urinary bladder carcinoma that histologically resembles its counterpart in the female genital tract. The histogenesis of this neoplasm is uncertain: the clinicopathologic and histologic features are suggestive of a mullerian origin in some tumors, while others consider it a glandular differentiation of urothelium or a unique vesical adenocarcinoma of non-mullerian origin. [1] We present a case of clear cell adenocarcinoma in a 74-year-old woman, with a review of the literature and its differential diagnosis.

INTRODUCTION
Clear cell adenocarcinoma (CCA) of the urinary bladder is a rare malignancy, with only 41 cases reported in the English literature to date. [1] This tumor type usually arises in the female genital tract, and the cytologic and ultrastructural features of bladder CCA are similar enough to those of CCA of the female genital tract to suggest mullerian differentiation. [2] CCA is sometimes found in the lower urinary tract in women, most commonly involving the urethra, where it may arise in paraurethral ducts or diverticula. In males, the proposed theory of origin is glandular differentiation of urothelium. [3]

CASE REPORT
A 74-year-old woman was admitted with obstructive and irritative lower urinary tract symptoms of 2-3 months' duration, with a positive history of poor flow, prolonged voiding, thin stream, intermittency, post-void dribbling, urgency, urge incontinence and increased frequency. There was no history of hematuria, pyuria, flank pain or any instrumentation. She had been diagnosed as hypertensive 10 years earlier and was stable on medication. There was no prior surgical history. The patient's general condition on clinical examination was fair, and systemic examination was normal. On ultrasonography, a bladder mass measuring 3.6 cm × 3.2 cm was seen. On cystoscopy, the bladder neck was markedly narrowed and occupied by an extensive broad-based tumor measuring 4 cm × 3 cm × 2 cm involving both the anterior and posterior walls. Bilateral ureteral orifices were normal. Transurethral resection of the bladder tumor was performed. The tumor was in multiple fragments, which together measured 5 cm × 4 cm × 0.5 cm. On microscopic examination, the tumor showed a prominent micropapillary, tubulocystic and glandular pattern [Figure 1]. The papillae were broad and showed extensive myxoid change in the fibrovascular core [Figure 2a]. Many of the cells showed apical snouting [hobnail pattern, Figure 2b]. They had eosinophilic to vacuolated cytoplasm, which stained diffusely with periodic acid-Schiff (PAS) stain [Figure 2c]. Tumor cells showed moderate to marked nuclear atypia, and only a few mitoses were recognized. Basophilic material was present in the lumen of many of the tubules. In places, a cribriform pattern was observed [Figure 2d]. Focal infiltration into the muscularis propria was also observed. Based on the morphological and immunohistochemical findings, a final diagnosis of clear cell adenocarcinoma of the urinary bladder was made. The patient was given intravesical mitomycin and subsequently underwent radical cystectomy. All the margins were free of tumor. At six months' follow-up, the patient was reported to be doing well.

DISCUSSION
CCA of the urinary tract is rare, with only sporadic cases reported in the literature; to date, 41 cases have been reported. [1] The histogenesis of CCA of the urinary bladder remains unclear, and most information has been gained from single case reports and small case series.
[4] These tumors were originally categorized as mesonephric adenocarcinoma by Konnak in 1973. [5] Later, Young and Scully in 1985 introduced the term CCA for these tumors, which histologically resemble the CCA of the female genital tract of mullerian origin. [6] CCA of the urinary bladder occurs mostly in women, which also supports a mullerian origin of this tumor. Some authors regard CCA as a morphologic expression of urothelial carcinoma with glandular differentiation. [3] In a study conducted by Oliva, nine of thirteen CCA tumors had either minor foci of conventional urothelial carcinoma or foci resembling neoplastic urothelial cells. [3]

The main differential diagnoses [Table 1] of this tumor are nephrogenic adenoma, urothelial carcinoma with clear cell change, and metastasis of CCA from the ovary or kidney. [2] CCA ranges from 1 to 7 cm, and most cases present as polypoid or papillary masses in the trigone region of the urinary bladder, as seen in our case. Glycogen-rich clear cells are a hallmark of CCA. [3] Solid, papillary and tubulocystic areas are common, all of which are partially lined by hobnail cells. Cytologic atypia is usually moderate to severe, and mitoses are readily apparent. [2] In the present case, atypia was marked but mitoses were few. Nephrogenic adenoma, which can mimic CCA, usually presents as papillary, polypoid or sessile structures. Microscopically, it is composed of small tubules and cysts lined by a single layer of cuboidal, low columnar or hobnail cells with scant cytoplasm and bland nuclei. [2] Mitotic figures are rare. [7] Urothelial carcinoma with clear cell change may resemble CCA, but its architecture is less variable and it lacks hobnail cells. [8] Clear cell renal cell carcinoma metastasizing to the bladder is rare; approximately 20 cases have been reported in the literature. [9] Histologically, clear cell renal cell carcinoma is architecturally less variable, contains a delicate fibrovascular core, and does not have hobnail cells. Renal cell carcinoma is positive for low-molecular-weight cytokeratin and vimentin, while CCA is negative for vimentin. Clear cell myomelanocytic tumor of the urinary bladder consists of nests of clear to eosinophilic epithelioid cells with delicate vascular stroma. [10] Its tumor cells are positive for HMB-45 and smooth muscle actin, whereas the cells of CCA are negative for these markers. [10]

CONCLUSION
CCA is a rare tumor of the urinary bladder with distinctive features, and its histogenesis is still controversial. It occurs mostly in women, resembling its mullerian counterpart; however, cases reported in men suggest glandular differentiation (metaplasia) in urothelium/urothelial carcinoma. Unlike urothelial carcinoma, it responds poorly to chemotherapy and radiotherapy; radical cystectomy offers the best chance of long-term survival.
2018-04-03T04:50:40.663Z
2011-09-01T00:00:00.000
{ "year": 2011, "sha1": "c4ad1c6e0fde3d780494f4b9c5c2da4bd007c99e", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/0974-7796.84962", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "67fe842bd50ae4d550d43a4e1dfe438e161131f8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
267197860
pes2o/s2orc
v3-fos-license
Exploring biorefinery alternatives for biowaste valorization: a techno-economic assessment of enzymatic hydrolysis coupled with anaerobic digestion or solid-state fermentation for high-value bioproducts

ABSTRACT
Enzymatic hydrolysis of organic waste is gaining relevance as a complementary technology to conventional biological treatments. Moreover, biorefineries are emerging as a sustainable scenario to integrate waste valorization and the production of high-value bioproducts. However, their application to municipal solid waste is still limited. This study systematically evaluates the techno-economic feasibility of the conversion of the organic fraction of municipal solid waste (OFMSW) into high-value bioproducts through enzymatic hydrolysis. Two key variables are examined: (a) the source of the enzymes, commercial or produced on-site using OFMSW, and (b) the treatment of the solid hydrolyzate fraction, solid-state fermentation (SSF) for the production of biopesticides or anaerobic digestion for the production of energy. As a result, four different biorefinery scenarios are generated and compared in terms of profitability. Results showed that the most profitable scenario was to produce enzymes on-site and valorize the solid fraction via SSF, with an internal rate of return of 13%. This scenario led to higher profit margins (74%) and a reduced payback time (6 years), in contrast with commercial enzymes, which led to an unprofitable biorefinery. Also, the simultaneous production of higher-value bioproducts and energy reduced the economic dependence of OFMSW treatment on policy instruments while remaining energetically self-sufficient. The profitability of the biorefinery scenarios evaluated was heavily dependent on the enzyme price and the efficiency of the anaerobic digestion process, highlighting the importance of cost-efficient enzyme production alternatives and high-quality OFMSW. This paper contributes to understanding the potential role of enzymes in future OFMSW biorefineries and offers economic insights on different configurations.

Introduction
In recent years, growing environmental concern and the energy crisis have accelerated the development of innovative waste management and treatment technologies. Particularly, the increasing generation of municipal solid waste (MSW), primarily composed of organic waste, has prompted society to seek resource and energy recovery from waste [1]. The organic fraction of municipal solid waste (OFMSW), commonly referred to as biowaste, is composed of food residues and green waste, and it is suitable for valorization through biological treatments [2].
As separate collection systems for OFMSW expand, both the quality of OFMSW and its collection costs increase [3,4]. Therefore, it is becoming increasingly apparent that waste valorization alternatives beyond well-established and robust methods, such as composting and anaerobic digestion, are necessary to maximize profitability. In this regard, biorefinery-like configurations appear as an alternative to traditional OFMSW treatment plants [5]. Biorefineries are sustainable bioprocessing facilities that optimize revenue generation from the original feedstock while also reducing impacts on natural resource consumption [6]. This is achieved by integrating different conversion methods to produce multiple marketable bioproducts [1]. According to the cascading principle, added-value products should be produced first, followed by energy generation [7]. Currently, the preferred configuration for source-selected OFMSW treatment plants in Europe is anaerobic digestion for biogas production, which is used to generate electricity and heat, followed by composting to stabilize the digestate and produce compost [8]. However, within an OFMSW biorefinery scheme, processes that fractionate or convert complex organic matter into a wide variety of bioproducts would come before anaerobic digestion. By doing so, biorefineries improve the sustainability of waste management in line with EU circular economy policies [9].

Several biorefinery configurations have been proposed to convert the OFMSW into value-added products such as biosurfactants, sugar syrups, bioethanol, succinic acid, lactic acid and biopesticides [10-15], and they have been reviewed in depth elsewhere [4,5]. A common trait among many of these is the use of enzymatic hydrolysis to fractionate the complex OFMSW macromolecules into functionalized molecules, which act as building blocks for subsequent steps [5,16]. Comprising 45-85% of the OFMSW composition [16], carbohydrates and fibers serve as a source of fermentable sugars, capable of being converted into bio-based products via fermentation processes [13]. However, these studies are mostly focused on the sugar-rich liquid fraction and either disregard the solid fraction or consider it a waste and direct it to anaerobic digestion. Therefore, further research is needed to integrate the solid hydrolyzate fraction into the overall valorization pathway of biorefineries.

Our recent study demonstrated the utilization of the solid fraction remaining after enzymatic hydrolysis for Bacillus thuringiensis (Bt) biopesticide production through solid-state fermentation (SSF) in a 22 L bench-scale bioreactor [17]. A final concentration of 4 × 10⁸ spores of Bt per gram of dry matter was obtained by mixing the solid hydrolyzate with solid digestate, which has also been demonstrated to be a suitable substrate for biopesticide production through SSF [18,19]. SSF technology is attracting increasing interest due to its low water and energy requirements without compromising yield [20].
Biopesticides are biological agents that offer a promising alternative to chemical pesticides for the control of pests in agriculture [21]. The global market for biopesticides is growing at an annual rate of 15% [22], driven by rising awareness of the environmental and health risks of chemical pesticides, increased demand for organic and sustainable agricultural products, and government regulations that favor the use of biopesticides [21]. This market is dominated by biopesticides derived from Bacillus thuringiensis, which represent over 80% of the global biopesticide market [23].

OFMSW stands out as a unique feedstock given its inherent variability, heterogeneity, complex structure and indigenous microbial consortium. The robustness and efficiency of anaerobic digestion technology for the conversion of OFMSW into biogas and biomethane has positioned it as an essential foundation upon which to build an OFMSW biorefinery [24]. Furthermore, biogas can be converted into electricity and heat or upgraded to be pumped into the natural gas grid, making it an economically compelling choice. Therefore, novel integrated OFMSW biorefineries should be built on existing treatment facilities using anaerobic digestion.

The OFMSW biorefinery configuration proposed in this study is based on the use of enzymatic hydrolysis to obtain a sugar syrup and SSF to convert the subsequent solid hydrolyzate, together with solid digestate, into a solid-state biopesticide. However, it is well known that the main obstacle to implementing enzymatic hydrolysis at an industrial scale is the high cost of commercial enzymes [25], considering that the complex nature of OFMSW hampers enzyme immobilization or recovery. To avoid the market cost of enzymes, their production could be integrated into the biorefinery. Fungal species have been widely exploited in SSF processes owing to their broad battery of enzymes and their ability to grow on solid substrates [20]. Table 1 summarizes different types of enzymatic activities that have been produced by SSF using organic wastes. The inoculum employed mainly belongs to the fungal genus Aspergillus or to the autochthonous microbiota of the waste materials. Specifically using OFMSW as substrate, Ladakis et al. [13] proposed on-site production of crude enzymes within an OFMSW biorefinery dedicated to succinic acid production. This approach involved less than 20% of the total capital investment and reduced processing costs. However, it is important to evaluate whether the reduced operating cost compensates for the additional capital investment required. Another major economic advantage of producing enzymes on-site is the reduced formulation requirements, as long-term storage and transportation are avoided and the formulation can be adjusted to the immediate application, reducing the downstream purification steps required in more complex processes [5,13].
In this study, the economic performance of an OFMSW biorefinery integrating enzymatic hydrolysis into an anaerobic digestion treatment plant is explored. Process design, techno-economic assessment and investment profitability have been systematically studied to show the potential of enzymatic hydrolysis for the treatment of OFMSW. Four scenarios are proposed to evaluate the allocation of the solid hydrolyzate for biopesticide production by SSF or for energy production by anaerobic digestion, as well as to study the origin of the enzymes, whether they are commercially sourced or produced on-site. The aim is to compare these scenarios and identify the biorefinery configuration that is most profitable for potential implementation. Currently, this is one of the few articles evaluating in detail the role of enzymatic hydrolysis from a techno-economic perspective in future OFMSW biorefineries.

Simulation description
The proposed biorefinery was simulated with a processing capacity of 300 t per day of source-separated OFMSW, which has been assumed to contain a high organic matter percentage and to be free of inert materials. The simulation begins at the gate of the biorefinery, so the costs of waste collection and transportation have not been considered. The plant is located in Barcelona (Spain) and has a 20-year lifetime, including three years of construction and the start-up phase. It operates 333 days per year, corresponding to an annual processing capacity of 100,000 tonnes, which is equivalent to a population of approximately 507,000 inhabitants based on the average waste generation rate in Catalonia [30]. Mass and energy balances of the processes and all economic calculations were conducted with Microsoft® Office Excel (version 2019).

Process description
The main bioconversion process presented in this study has been previously demonstrated at laboratory and bench scales, providing the technical data needed to perform the simulation [17,31]. When required, the experimental data were complemented with data from the literature, as indicated. The biorefinery configurations proposed are based on three main technologies, namely enzymatic hydrolysis, solid-state fermentation and anaerobic digestion. Depending on the allocation of the solid enzymatic hydrolyzate and the origin of the enzymes, four scenarios are proposed to investigate alternatives for implementing enzymatic hydrolysis in the treatment of OFMSW. As shown in Figure 1, Scenarios I and II evaluate the use of commercial enzymes, with Scenario I using the solid hydrolyzate in SSF for biopesticide production and Scenario II in anaerobic digestion for energy production. Meanwhile, Scenarios III and IV explore the use of on-site produced enzymes with analogous solid hydrolyzate allocation strategies. The main parameters used for developing the process simulations are summarized in the supplementary material (Table S1).
Pretreatment area
The process begins with a primary shredder to reduce the size of and homogenize the OFMSW. The shredded OFMSW is then split: 70% is sent to the current energy valorization line via anaerobic digestion and 30% to the novel enzymatic hydrolysis valorization line. This ratio remains the same for all four scenarios. For the enzymatic hydrolysis route, an autoclaving step has been introduced to minimize the microbial load. It also serves as a mild hydrothermal pretreatment for lignocellulosic materials, increasing cellulose accessibility to enzymes while generating fewer inhibitory compounds than harsher pretreatment methods [12]. A 20 t batch size has been assumed based on a previous study, in which the authors used a pilot-scale autoclaving system for MSW that required 12 kWh of electricity, 76 kWh of natural gas and 245 L of water per ton of waste [12].

Enzymatic hydrolysis
The enzymatic hydrolysis to extract sugars from the OFMSW is performed in a closed tank at a 10% solid-to-liquid ratio (w/v), which is adjusted by adding distilled water, considering that the moisture content of the autoclaved OFMSW is 75% [31,32]. The commercial enzymatic cocktail selected is Viscozyme L®, dosed at 0.08 mL g⁻¹ of dry OFMSW, as optimized in previous works [31]. The mixture is then heated to 50°C for 24 h. Two hydrolysis tanks operate in batch mode, considering a total operation time of 48 h to account for the time needed to load and unload the tanks. After hydrolysis, a concentration of 50 g L⁻¹ of reducing sugars is achieved in the liquid fraction [31,33]. This liquid fraction is recovered by a decanter centrifuge, assuming that it accounts for 74% of the initial fresh substrate mixture [17]. Finally, water is evaporated in a drying step until a final concentration of 50% is achieved, which is acceptable for mixed sugar syrups from lignocellulosic materials [34]. The solid hydrolyzate fraction recovered in the centrifuge is used as a substrate for SSF (Scenarios I and III) or, alternatively, for anaerobic digestion (Scenarios II and IV) (Figure 1). To simulate the energy demand for heating the mixture to the hydrolysis temperature, a heat capacity of 3.3 kJ kg⁻¹ K⁻¹ was calculated using the correlation presented by Manjunatha et al. [35] for MSW with a moisture content of 75%. The concentration step is performed by a flash bed dryer with an energy consumption of 3 MJ kg⁻¹ of evaporated water [36].
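As a rough plausibility check, the sketch below chains the quoted hydrolysis parameters together for a single 20 t batch. It is a back-of-the-envelope balance, not the study's actual Excel model: densities are approximated as 1 kg/L, the heating baseline of 20°C is our assumption, and all variable names are ours.

```python
# Back-of-the-envelope mass/energy balance for one 20 t hydrolysis batch.
batch_wet = 20_000.0                 # kg autoclaved OFMSW per batch
moisture = 0.75                      # moisture content of autoclaved OFMSW
dry_mass = batch_wet * (1 - moisture)        # kg dry matter

# Dilute to a 10% solid-to-liquid ratio (w/v); volumes treated as ~1 kg/L
slurry = dry_mass / 0.10                     # kg (~L) of total slurry
water_added = slurry - batch_wet             # approximate make-up water

# Heating from an assumed 20 C to 50 C with cp = 3.3 kJ/(kg K)
heat_kwh = slurry * 3.3 * (50 - 20) / 3600

# Liquid fraction: 74% of the fresh mixture, at 50 g/L reducing sugars
liquid = 0.74 * slurry                       # kg (~L) of liquid hydrolyzate
sugars = 0.050 * liquid                      # kg of recovered sugars

# Concentrate the syrup to 50% w/w: evaporate water at 3 MJ/kg
water_evaporated = liquid - sugars / 0.50
drying_kwh = water_evaporated * 3.0 / 3.6    # MJ -> kWh

print(f"dry matter: {dry_mass:.0f} kg, sugars: {sugars:.0f} kg")
print(f"heating: {heat_kwh:.0f} kWh, drying: {drying_kwh:.0f} kWh")
```

Even at this crude level, the drying step dwarfs the heating demand, which is consistent with the dominance of the concentration step in the energy results reported later.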
Solid-state fermentation for biopesticide production
In Scenarios I and III (Figure 1), the solid hydrolyzate obtained after the enzymatic hydrolysis is used to produce a fermented solid with biopesticide activity through SSF. In an SSF bioreactor, the solid hydrolyzate is mixed with pasteurized solid digestate and wood chips, which act as bulking agent, in a 1:1:0.5 ratio (w/w) [17]. The substrate mixture is then inoculated with Bacillus thuringiensis var. israelensis, previously grown in liquid media in a seed bioreactor. To reach an initial concentration of 10⁷ viable cells g⁻¹ of dry matter, around 25 L of inoculum per ton of substrates are required, assuming an inoculum concentration of 10⁸ viable cells mL⁻¹ [17]. The process begins when the mixture is forcefully aerated with 250 m³ of compressed air per ton per day, and it lasts for six days, considering one day for loading, unloading and cleaning the bioreactor [17]. After 5 days, the fermented solid is sieved to recover the bulking agent, assuming a 90% recovery efficiency (personal communication with a waste treatment plant manager).

To simulate the energy consumption of the SSF process, it has been assumed that no heating is required, as heat accumulates in the solid matrix during the course of fermentation [17]; indeed, temperature is controlled by introducing cold air. Using a conversion factor of 396 kJ m⁻³, the air supply is transformed into the total energy consumed, resulting in a specific energy consumption of 27.5 kWh t⁻¹ of substrate [37]. For the inoculum preparation, a power consumption of 4 kW m⁻³ is considered [38]. Lastly, for the sieve, an electric power of 40 kW is assumed (Terra Select T4®).
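The aeration figure above can be verified directly from the two quoted factors; the snippet below is just that arithmetic check, with variable names of our choosing.

```python
# Arithmetic check of the SSF aeration energy quoted above.
air_per_tonne_day = 250.0      # m3 of compressed air per tonne per day
kj_per_m3 = 396.0              # kJ per m3 of compressed air
kwh_per_tonne = air_per_tonne_day * kj_per_m3 / 3600.0
print(f"{kwh_per_tonne:.1f} kWh per tonne of substrate")  # -> 27.5
```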
Anaerobic digestion
The majority (70%) of the incoming OFMSW into the plant is processed through anaerobic digestion to produce energy, as is the solid hydrolyzate produced after enzymatic hydrolysis in Scenarios II and IV (Figure 1). For the simulation of the anaerobic digestion process, data from a local municipal OFMSW treatment plant have been used [39]. The ground OFMSW, which has a dry matter content of 25% [17], is mixed with water until a 15% dry matter content is achieved. Mesophilic anaerobic digesters of 3000 m³, with a hydraulic retention time of 16 days and a loading rate of 3.0 kg VS m⁻³ day⁻¹, have been assumed. A biogas production rate of 151 m³ t⁻¹ OFMSW, with a 64% methane content, is considered based on a previous work using high-quality source-separated OFMSW [40]. The obtained biogas is used to produce heat and power by means of a biogas engine (CHP unit) [39]. According to Tampio et al. [41], the quantity of produced digestate is calculated by subtracting the mass of biogas from the total input material (including water), taking into account the biogas composition and the component densities (CH₄ 0.72 kg m⁻³ and CO₂ 1.96 kg m⁻³). Another decanter centrifuge is used to recover the digested solids, considering a rate of 20% of the input [41]; these are then transferred to a hygienization unit for pasteurization, as specified in European Regulation No. 142/2011, before being used as a cosubstrate in the SSF in Scenarios I and III (Section 2.2.3) or as fertilizer in Scenarios II and IV (Figure 1). In this study, the fate of the liquid digestate has not been considered, as the scope is limited to the solids fraction and previous works have evaluated it extensively [40,41]. To simulate the energy consumption of the anaerobic digestion process, an average electric energy consumption of 0.2 kWh t⁻¹ of input material [41,42] has been considered. Thermal energy consumption has been specifically calculated as the energy required to heat the input material (ground OFMSW and water) to mesophilic temperature (40°C). To do so, it is assumed that the specific heat capacity of the input flow is the same as that of water (4.18 kJ kg⁻¹ °C⁻¹), as in [41]. The energy required for heating the solid digestate in the hygienization step (75°C) is calculated following the same principle (see Supplementary material). The conversion of the produced biogas into heat and electricity in the CHP unit was simulated by calculating the electricity yield (kWh t⁻¹) using the methane content (64%), a conversion factor of 10 kWh per m³ of methane, and CHP conversion efficiencies of 38% for electricity and 48% for heat [41,43,44]. The energy consumption of the CHP unit itself is assumed to be 10% of the total energy produced [41]. The generated heat is assumed to be used in the plant, via a heat exchanger system, for heating applications (anaerobic digestion, hygienization, enzymatic hydrolysis and drying steps) [39]; 20% of the total heat produced has been accounted for as heat losses [45]. Electricity is also assumed to cover the plant requirements and, when excess is produced, to be sold to the electrical grid.
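Under the stated assumptions, the biogas-to-energy chain reduces to a few multiplications. The sketch below chains only the factors quoted in the text; the function and variable names are ours, and it is an illustration of the calculation, not the authors' spreadsheet.

```python
# Sketch of the biogas-to-energy conversion using the factors quoted above.
def chp_yields(ofmsw_t: float) -> dict:
    biogas_m3 = 151.0 * ofmsw_t            # biogas yield, m3 per t of OFMSW
    ch4_m3 = 0.64 * biogas_m3              # 64% methane content
    primary_kwh = 10.0 * ch4_m3            # 10 kWh per m3 of methane
    elec_kwh = 0.38 * primary_kwh          # CHP electrical efficiency, 38%
    heat_kwh = 0.48 * primary_kwh          # CHP thermal efficiency, 48%
    own_use_kwh = 0.10 * (elec_kwh + heat_kwh)  # CHP unit own consumption
    usable_heat_kwh = 0.80 * heat_kwh      # after 20% heat losses
    return {"electricity_kwh": elec_kwh,
            "usable_heat_kwh": usable_heat_kwh,
            "chp_own_use_kwh": own_use_kwh}

# Per tonne of OFMSW fed to the digesters:
print(chp_yields(1.0))
# roughly 367 kWh of electricity and 371 kWh of usable heat per tonne,
# before subtracting the CHP unit's own consumption
```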
Solid-state fermentation for enzyme production
In Scenarios III and IV (Figure 1), it is proposed that the enzymes consumed in the enzymatic hydrolysis be produced on-site, since it is well known that the cost of commercial enzymes is a limiting factor for their application [25,46]. To do so, an SSF process for enzyme production has been simulated based on data from previous studies [13,47]. It has been demonstrated that OFMSW is an adequate substrate for producing crude enzymes using fungal species such as Aspergillus awamori [13,48]. For the simulation, 50% of the ground and autoclaved OFMSW is mixed with a bulking agent and inoculated with A. awamori at a concentration of 10⁶ spores g⁻¹, using an inoculum previously grown in seed bioreactors. The SSF process lasts five days, including a day for loading and unloading operations. This fermented solid, rich in crude enzymes, is mixed with the remaining 50% of the sterile OFMSW, diluted with distilled water to attain a solid load of 20% and heated to 50°C following the conditions explained in Section 2.2.2. According to [13], the fermented solid contained 24 U g⁻¹ of maltase activity, 39 U g⁻¹ of glucoamylase activity and 2 U g⁻¹ of cellulase activity, whereas the reported activity of Viscozyme L® is ≥100 FBU g⁻¹. It was decided to use the whole SSF solids to ensure complete utilization of the sugars in the OFMSW and to simplify the process.

Economic analysis
To compare the different scenarios, their economic performance was studied by estimating the capital cost, the operating cost and the revenue generation. The cumulative cash flow was then calculated, and profitability was assessed by evaluating different techno-economic indicators.

Total capital cost estimation
The total capital cost includes the fixed capital investment (FCI) and the working capital cost. The FCI refers to the total cost of designing, constructing, installing and modifying the plant [49]. It is estimated by applying calculation factors to the purchase cost of the equipment for mixed fluids-solids processing plants [49]. Table 2 presents the details of the estimation assumptions for the economic evaluation. For the equipment purchase cost, the size and capacity characteristics were first estimated based on the data from the material balance (Section 2.2), considering the flow rate and the residence time in each unit [50,51]. The cost was then calculated using recent available cost data and the factor method [49-51]. The specifications considered for each piece of equipment, as well as the details for calculating the equipment purchase cost, can be found in the Supplementary material (Tables S3 and S4). No equipment for storage or for solids movement, such as conveyors or trucks, was considered in this estimation. The working capital cost, which represents the capital needed to maintain plant operations, is recovered at the end of the plant life [49]. No further capital cost is considered to be recovered after the lifetime of the plant.

Operational cost estimation
The annual operating cost includes the fixed operating cost (FOC), which is equal for all scenarios, and the variable operating cost (VOC), which depends on the material flow of each scenario [49]. Abad et al. [39] provided data on operational costs per ton of treated waste for a local OFMSW anaerobic digestion treatment plant, which were used to estimate the costs of labor, wastewater treatment and analysis based on the processing capacity of the biorefinery, as indicated in Table 2. The cost of utilities was obtained from the Catalan Institute of Energy and the Catalan Water Agency as the average price for industries over the last two years (2021-2022) [52,53]. For demineralized water, the cost has been estimated at double the price of raw water, as indicated by [49]. The commercial enzyme (Novozymes Viscozyme L®) used in this study has an indicative cost of around 13.8 € kg⁻¹ (15 $ kg⁻¹) [54]. The cost of the bulking agent for the SSF processes has been disregarded because a recovery system has been implemented. Other components of the operating cost are summarized in Table 2.
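To make the factor method concrete, the sketch below shows the two textbook steps it typically involves: scaling a reference equipment cost with a size exponent, then applying an overall installation factor to reach the fixed capital investment. All cost figures here are illustrative placeholders, not the study's data, and the exponent and Lang-type factor are generic textbook values (cf. Towler & Sinnott), not the factors actually used in the paper.

```python
# Generic sketch of factor-method capital cost estimation (illustrative only).
def scaled_cost(base_cost: float, base_size: float,
                new_size: float, exponent: float = 0.6) -> float:
    """Six-tenths-rule scaling of a purchased equipment cost."""
    return base_cost * (new_size / base_size) ** exponent

# Example: price a 3000 m3 digester from a hypothetical 1500 m3 reference
digester = scaled_cost(base_cost=900_000, base_size=1500, new_size=3000)

# Fixed capital investment via an overall installation (Lang-type) factor;
# 3.6 is a typical value for mixed fluids-solids plants, assumed here.
fci = digester * 3.6
working_capital = 0.15 * fci   # assumed fraction, recovered at end of life
print(f"digester: {digester:,.0f} EUR, FCI: {fci:,.0f} EUR")
```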
Revenue estimation
In addition to product sales, the OFMSW treatment service fee also generates revenue (Table 2). The OFMSW treatment fee was obtained from the Catalan Agency of Waste [55]. For the mixed sugar syrup obtained after the enzymatic hydrolysis, the price was set to the minimum of the raw sugar market, 0.37 € kg⁻¹ [56], which is half that of purified sugar syrups [11]. For the solid biopesticide, an estimation was made based on the quantity of active ingredient (spores of Bacillus thuringiensis var. israelensis) in comparison with products available on the market, such as VectoBac WG®. The price of the solid digestate as fertilizer was set to that of compost from OFMSW [39].

Profitability analysis
The techno-economic indicators used to determine the most cost-effective scenario under the prevailing conditions were the gross profit margin, the net production cost, the payback time, the return on investment (ROI) and the net present value (NPV) [49]. First, the net cash flow was calculated over the plant's lifetime by assuming a 3-year construction period, a corporation tax rate of 25% and 10 years for depreciation. The gross profit margin (%) indicates the portion of revenue remaining after subtracting the operating cost and serves as an indicator of the efficiency of the plant. The net production cost (€ t⁻¹) is the operating cost per t of treated OFMSW. The payback time (years) refers to the time needed to recover the initial investment cost, while the ROI (%) refers to the rate of cash return on that investment. The ROI has been calculated over the lifetime of the plant as indicated in Equation 1:

ROI (%) = [cumulative net profit / (plant lifetime × total capital investment)] × 100    (1)

Finally, the NPV was calculated by discounting the future cash flows, including the initial investment:

NPV = Σ_{t=0}^{n} CF_t / (1 + i)^t

where CF_t is the net cash flow during the period t (including the initial investment), and i is the interest or discount rate, which was assumed to be 6%. The interest rate at which the NPV is zero is known as the internal rate of return (IRR) and also indicates the efficiency of the investment. The higher the discount rate, the longer it takes for the plant to reach a positive NPV [15].

Sensitivity analysis
Sensitivity analysis is a method to examine the effects of uncertainty in model input parameters on the economic viability of the project and to identify the parameters that have the greatest impact on the outcome [49]. Using NPV as the economic indicator, the analysis provides insight into the level of risk associated with predicting the future performance of each scenario. The model assumptions independently evaluated at a ±25% variation from their reference values were the enzyme cost, biopesticide price, direct fixed cost (DFC), biogas production rate and sugar yield. The results are represented in a static tornado diagram as the range of the output variation (NPV) for each variable over the specified range. Furthermore, the effect of unexpectedly long downtime was evaluated by varying the production rate while keeping capital and fixed operating costs constant [49].
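The indicators above are straightforward to compute once the annual cash flows are laid out. The sketch below implements NPV, an IRR search and a simple payback count, plus a ±25% tornado-style sweep of one input; the cash-flow numbers are illustrative placeholders, not the study's results, and all names are ours.

```python
# Sketch of the profitability indicators and a tornado-style sweep.
from typing import List

def npv(cash_flows: List[float], rate: float) -> float:
    """NPV = sum_t CF_t / (1 + rate)^t, with t = 0 the first outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows: List[float]) -> float:
    """Bisection search for the rate at which NPV crosses zero."""
    lo, hi = -0.99, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(cash_flows, mid) > 0:
            lo = mid            # NPV still positive: rate can go higher
        else:
            hi = mid
    return (lo + hi) / 2

def payback_years(cash_flows: List[float]) -> int:
    """First year in which the cumulative (undiscounted) cash flow >= 0."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return -1  # investment never recovered within the horizon

# Illustrative: 3 construction years of outlays, then 17 years of profit
flows = [-10e6, -10e6, -10e6] + [4e6] * 17
print(f"NPV @6%: {npv(flows, 0.06)/1e6:.1f} MEUR, IRR: {irr(flows):.1%}")
print(f"payback: year {payback_years(flows)}")

# Tornado-style sensitivity: perturb annual profit by +/-25%
for factor in (0.75, 1.25):
    f = [-10e6] * 3 + [4e6 * factor] * 17
    print(f"profit x{factor}: NPV = {npv(f, 0.06)/1e6:.1f} MEUR")
```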
Mass balance and bioproducts obtained
The proposed biorefinery in this study processes 100,000 t year⁻¹ of OFMSW with a 25% dry matter content. Different bioproducts are obtained depending on the scenario evaluated (Figure 1): sugar syrup, solid biopesticide (Scenarios I and III), solid fertilizer (Scenarios II and IV) and energy. Table 3 summarizes the overall component balance for each scenario. Enzymatic hydrolysis was the first bioprocess performed in all scenarios, recovering 5,614 t year⁻¹ of sugars with the commercial enzymes (Scenarios I and II) and 2,954 t year⁻¹ of sugars with the on-site produced enzymes (Scenarios III and IV). This corresponds to a sugar production yield of 0.2 g of sugar per gram of autoclaved OFMSW, considering that 30% (30,000 t year⁻¹) and 15% (15,000 t year⁻¹) of the total OFMSW input is directed toward enzymatic hydrolysis, respectively. From the residual solids after the enzymatic hydrolysis, 42,138 t year⁻¹ (Scenario I) and 32,207 t year⁻¹ (Scenario III) of solid biopesticide can be produced by SSF. Therefore, by redirecting a part of the incoming high-quality OFMSW, two high-value products can be obtained in addition to the energy produced in the anaerobic digestion.

In Scenarios I and III, the solid digestate was used as a cosubstrate in the SSF process, contributing to pH control and maintaining its value close to the neutral range, adequate for Bacillus thuringiensis growth [17,32]. On the other hand, in Scenarios II and IV, where no SSF for biopesticide production is performed, the produced digestate (25,551 t year⁻¹ and 22,890 t year⁻¹, respectively) is directly sold as fertilizer, which is four times cheaper than the biopesticide (Table 2). In these scenarios, the residual solids after the enzymatic hydrolysis are directed toward anaerobic digestion, generating 28% additional energy. Figure 2 presents the total energy consumed and produced in each scenario. The main energy-consuming step in all scenarios is enzymatic hydrolysis, mainly due to the drying equipment of the concentration process (see Supplementary material, Table S3), which accounts for 66-68% of consumption in Scenarios I and II and 53-54% in Scenarios III and IV. The decrease in energy consumption in Scenarios III and IV is a result of redirecting a smaller quantity of OFMSW to enzymatic hydrolysis, which consequently reduces the input of liquid hydrolyzate into the concentration step. Water evaporation has been described as a significant energy-consuming step in organic waste biorefineries using enzymatic hydrolysis [57]. Consequently, implementing energy-saving strategies in the drying process can have a significant impact on the overall energy consumption. For example, achieving higher sugar concentrations would reduce water evaporation and thus lower energy consumption, as assessed later in the sensitivity analysis (Section 3.4). The next energy-consuming step in all scenarios is the anaerobic digestion (Figure 2), mostly due to the heating and operation of the bioreactors and the biogas conversion unit (Table S3). Scenarios II and IV consume around 4-5% more than Scenarios I and III, respectively, because the solid hydrolyzate is valorized through anaerobic digestion and more bioreactors are required. Consequently, more energy is also produced, as observed in Figure 2.
In each scenario, the energy consumption of the SSF steps remained below 4%, although this might vary if a temperature control system were considered. Only Scenario I, which evaluated the use of commercial enzymes and the production of biopesticides from the solid hydrolyzate, exhibited a negative energy balance. In Scenario II, the additional 146 kWh t⁻¹ OFMSW derived from processing the hydrolyzed solid in anaerobic digestion offset the energy requirements, and in Scenarios III and IV the energy savings in the enzymatic hydrolysis step also led to positive energy balances.

Capital and operating cost
A summary of the capital and operating costs is presented in Table 4. The capital investment needed for each scenario was calculated based on the cost of the equipment required to perform the processes, as shown in Figure 1, with detailed equipment costs provided in the supplementary material (Table S4).

The main difference among the four scenarios is that in Scenarios II and IV the SSF for biopesticide production is not conducted; this process accounts for 33% and 26% of the equipment cost in Scenarios I and III, respectively, so its omission results in capital cost savings. Furthermore, Scenarios III and IV include the additional SSF process for enzyme production, which increases the capital cost by 8% and 10% with respect to Scenarios I and II (Table 4). The main contributor to the capital investment in all scenarios is the anaerobic digestion process, which is consistent with the fact that it treats 70% of the OFMSW input into the plant, followed by the SSF processes. Additional expenses for constructing the plant were projected based on the cost of equipment (Table S4).

Regarding operating costs (Table 4), a noticeable difference can be seen depending on whether the enzymes are purchased (Scenarios I and II) or produced on-site (Scenarios III and IV). This is a result of the high cost of the commercial enzymes, which are responsible for 74% of the VOC in Scenario I, where energy costs account for an additional 17%, and 91% of the VOC in Scenario II, where the energy balance is positive and no energy input is needed. The sensitivity of the financial analysis to enzyme costs is addressed in Section 3.4. The other contributors to the VOC are negligible in comparison (Table S5). The FOC accounts for 23% of the operating cost in Scenarios I and II and 83% in Scenarios III and IV. Within this category, labor costs remain consistent across all scenarios, while maintenance and insurance costs are directly related to the capital investment and consequently experience minimal variations. Based on the operating costs, producing enzymes on-site in the biorefinery effectively halves the net production cost per t of OFMSW treated, thereby indicating greater profit margins.
Revenues and investment analysis
The revenues and total income generated per t of OFMSW treated in the different scenarios are shown in Table 4. The profit comes from selling sugar syrup, biopesticide or fertilizer and electricity, and also from the treatment fee, which was equal for all scenarios. This treatment fee represents 14-17% of income in Scenarios I and III, where the solid biopesticide is produced, and up to 32-40% in Scenarios II and IV, where the solid hydrolyzate is redirected into energy production. It is therefore observed that the economic viability of the biorefinery depends heavily on policy instruments when energy production is prioritized over high-value products. Scenario I generated the highest revenue, at 223 € t⁻¹ OFMSW, whereas Scenario II produced the lowest, at 87 € t⁻¹ OFMSW. In fact, for Scenario II the revenues were lower than the net production cost, indicating that treating the OFMSW was not financially viable under these circumstances. Therefore, the use of enzymes in anaerobic digestion without significant production of higher-value bioproducts is limited by their cost. According to Panigrahi et al. [58], this is the same reason for the limited application of enzymes as a pretreatment of OFMSW. The highest enzyme cost that still allows a profitable anaerobic digestion process has been reported to be 0.28 € L⁻¹ [59], around 50 times lower than the actual price (Table 2). The market price of novel bioproducts, such as enzymes or solid biopesticides, can depend on different factors, such as the intended application or regional policies, so the sensitivity of the economic viability to the selling price of this product is evaluated in Section 3.4. Scenario III, with on-site enzyme production and use of the solid hydrolyzate for biopesticide production, was the only scenario that proved profitable under the circumstances evaluated, as indicated by a positive NPV over the lifetime considered at a 6% discount rate (Figure 3). Taxes are only present in scenarios with a positive net profit. Scenarios I and IV presented a positive gross margin, a payback time shorter than the considered lifespan of the plant, and a positive ROI, but a negative NPV. For Scenario I, with commercial enzymes and biopesticide production, it can be observed that the VOC is substantial (Figure 3) and not efficiently covered by the revenues; this is attributed to the cost of enzymes and the negative energy balance (Table S5). On the other hand, in Scenario IV, with on-site enzyme production and use of the solid hydrolyzate for energy production, the variable costs are significantly reduced (Figure 3), giving an acceptable gross profit margin of 60% (Table 4). However, the process is still not profitable in terms of NPV, indicating that future cash flows may not be sufficient to justify the initial investment over the long term. Considering the impact of the capital investment, the sensitivity of the financial analysis to the direct fixed cost (DFC) is addressed in Section 3.4. Therefore, for an OFMSW biorefinery using enzymes to be economically viable, it should not only be energetically self-sufficient but also generate adequate revenues to cover the operating cost associated with enzyme usage and justify the increased capital investment. The on-site integration of enzyme production lowers the operating cost to an acceptable level, as for lignocellulosic biorefineries [60].
In comparison with the current alternative for treating OFMSW based on anaerobic digestion and composting technologies (without heat recovery) [39], which presents a total income of 33.5 € t⁻¹ of OFMSW (considering a 1.6 increase factor in the price of energy since 2019), the OFMSW biorefinery evaluated in Scenario III represents a 5-fold increase in revenues. Overall, it can be said that SSF can be integrated into OFMSW treatment plants as a supplementary tool to enhance flexibility and produce higher-value bioproducts, rather than serving as a replacement for consolidated technologies. Nevertheless, to ensure the viability of the process, the quality of the collected OFMSW must be guaranteed in the first instance.

The present study has focused on the valorization of the solid hydrolyzate, but the literature offers many examples of the valorization of the sugar-rich liquid fraction into high-value products through fermentative systems. For instance, the authors of [10] proposed a profitable biorefinery scenario for the production of sophorolipids from food waste. The integration of such a process into the OFMSW biorefinery could increase its profitability. However, the large capital investment and energy consumption typical of sophisticated liquid fermentation processes might burden the economic viability, and further studies are required in this regard. The market value of the bioproducts and its predictability would be the key point.

Sensitivity analysis
Given the fluctuations in the global economic environment and the uncertainties in technological assumptions, it is necessary to assess the investment risk for significant parameters. The impact of fluctuations in the cost parameters identified as key in Section 3.3 (enzyme cost, biopesticide price and DFC), as well as the efficiencies assumed for the enzymatic hydrolysis and anaerobic digestion processes (through variations in the sugar yield and biogas production rate, respectively), was evaluated by a sensitivity analysis. A tornado diagram for all scenarios is shown in Figure 4, in which each parameter is changed by ±25% of its reference value while the others are kept at their reference values. The results indicate that the NPV is mostly affected by the biogas production rate, followed by the enzyme cost in Scenarios I and II and the DFC. The biogas production rate is directly related to the energy balance of the plant and, therefore, to the operating cost through the electricity requirements, especially in those scenarios in which the solid hydrolyzate is used for energy production (II and IV) and biopesticides are not produced. The degree of impact of the DFC, however, depends on the capital investment specific to each scenario. For instance, comparing Scenarios I and III, a ±25% change in the DFC affects the NPV by about 81% in Scenario III and 16% in Scenario I. This observation can be explained by the additional capital investment required for the SSF process for enzyme production in Scenario III, which results in a more significant impact on the NPV compared to Scenario I.
Biopesticide price and sugar yield are the least influential of the parameters studied, which can be explained by the smaller allocation of OFMSW to the novel valorization pathway. For all the variables examined, Scenarios I and II consistently exhibited unprofitable outcomes, while Scenario III remained profitable. Notably, Scenario IV was the most affected, indicating that with certain improvements, such as a higher biogas production rate or a lower direct fixed cost, it may become profitable.

The profitability of Scenario III was evaluated in further detail. Cumulative cash flows at different discount rates are presented in Figure 5. The capital investment was spent during the 3-year construction time, leading to negative initial values, but the cumulative NPV then increases as annual profits are generated over the lifespan of the plant. The IRR was 12.8%, indicating that the discount rate applied to the investment in the OFMSW biorefinery could be doubled and the project would still reach the break-even point by its end. This value is low in comparison with other OFMSW biorefinery configurations producing sugar syrups (15%-88%) [11] or sophorolipids (19%-36%) [10]. In both of these examples, the production of bioproducts with higher market values resulted in net present values two to seven times greater. Additionally, in other sugar syrup production studies [12], the higher IRR values observed result from the lower capital investment, because only the sugar production pathway is considered rather than the integral, near-zero-waste approach of this study. Finally, considering the relevance of the capital investment (Figure 4), the plant should operate at full capacity to maximize profitability. However, downtime is unavoidable due to maintenance operations, equipment failures, or power cuts, and, when prolonged, it results in an inefficient use of the capital investment and reduces the profitability of the plant. Hence, the critical processing capacity required to reach the break-even point was determined. As depicted in Figure 5, the break-even point is observed at 22% of capacity, which indicates that to generate a profit the plant must process a minimum of 22,200 t of OFMSW annually.

Conclusion

In this study, four different scenarios integrating the use of enzymes in an OFMSW treatment plant based on anaerobic digestion were studied through the simulation of technically proven processes. Two novel bioproducts, biopesticide and sugar syrup, can be obtained by redirecting a portion of the incoming OFMSW. The profitability analysis of the four scenarios assessed the origin of the enzymes and the allocation of the solid hydrolyzate. It revealed that high-value bioproducts, beyond the energy produced in the anaerobic digestion, are required to justify the higher operating cost associated with the use of commercial enzymes, which represent up to 91% of the variable cost. On-site enzyme production can reduce the operating cost by 70% while increasing the capital cost by around 10%. However, further research is required to validate the enzyme production process simulated in this study. Furthermore, future research should focus on optimizing and scaling up low-cost production systems for obtaining enzymes from OFMSW to achieve a self-sufficient biorefinery with a closed-loop production system.
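The IRR and break-even-capacity calculations described above can be reproduced with the short sketch below: a bisection solves NPV(r) = 0 for the internal rate of return, and a contribution-margin formula gives the minimum annual throughput. All input values are hypothetical, not the paper's figures.

```python
# Sketch of the IRR and break-even-capacity calculations. Inputs are invented.

def npv(cash_flows, rate):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    """Bisection on the discount rate; assumes NPV > 0 at lo and NPV < 0 at hi
    (a single sign change, as for a conventional invest-then-earn profile)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(cash_flows, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-10e6] + [1.6e6] * 15  # hypothetical project cash flows
print(f"IRR = {irr(flows):.1%}")

# Break-even throughput: annual profit = (revenue - variable cost) * t - fixed cost.
revenue_per_t, var_cost_per_t, fixed_cost = 120.0, 60.0, 1.33e6  # assumed values
breakeven_t = fixed_cost / (revenue_per_t - var_cost_per_t)
print(f"Break-even throughput ~ {breakeven_t:,.0f} t OFMSW per year")
```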
Figure 1. Process flowchart of the different biorefinery scenarios for the application of enzymatic hydrolysis in OFMSW treatment. (a) Scenario I, commercial enzymes and solid hydrolyzate valorization through SSF. (b) Scenario II, commercial enzymes and solid hydrolyzate valorization through anaerobic digestion. (c) Scenario III, in situ produced enzymes and solid hydrolyzate valorization through SSF. (d) Scenario IV, in situ produced enzymes and solid hydrolyzate valorization through anaerobic digestion.

Figure 2. Total energy consumption and production per ton of processed OFMSW in the different biorefinery processes for each scenario.

Figure 3. Investment cost, operating cost, revenues, and cumulative net present value (NPV) for the four OFMSW biorefinery scenarios. FOC, fixed operating cost. VOC, variable operating cost.

Figure 4. Static tornado diagram for each scenario showing the sensitivity of the net present value (NPV) to the variation (±25%) in each variable while the other variables are held constant. The nominal value is displayed as a vertical line.

Table 1. Examples of published works on SSF for enzyme production.

Table 2. Parameters for the economic evaluation.

Table 3. Overall component balance in a year for each scenario.

Table 4. Overall economic evaluation of the four OFMSW biorefinery scenarios. VOC, variable operating cost. FOC, fixed operating cost. ROI, return on investment. NPV, net present value.
Identifying Narrative Patterns and Outliers in Holocaust Testimonies Using Topic Modeling

The vast collection of Holocaust survivor testimonies presents invaluable historical insights but poses challenges for manual analysis. This paper leverages advanced Natural Language Processing (NLP) techniques to explore the USC Shoah Foundation Holocaust testimony corpus. By treating testimonies as structured question-and-answer sections, we apply topic modeling to identify key themes. We experiment with BERTopic, which leverages recent advances in language modeling technology. We align testimony sections into fixed parts, revealing the evolution of topics across the corpus of testimonies. This highlights both a common narrative schema and divergences between subgroups based on age and gender. We introduce a novel method to identify testimonies within groups that exhibit atypical topic distributions resembling those of other groups. This study offers unique insights into the complex narratives of Holocaust survivors, demonstrating the power of NLP to illuminate historical discourse and identify potential deviations in survivor experiences.

Introduction

In recent decades, significant efforts have been made to gather the accounts of the remaining Holocaust survivors. The passing of the last living witnesses and the beginning of the post-testimony era coincide with technological developments in NLP. The wealth of testimonies in the archives presents a challenge: how to preserve the significance of individual stories within a vast collection of a thousand testimonies, while also giving voice to the collective body of testimonies in a manner that honors the individuality of each story. By employing techniques such as contextualized topic modeling and topic narrative analysis, we aim to uncover broad trends within the collection while preserving the uniqueness and integrity of each personal narrative.

Despite advancements in NLP, the representation of long texts still poses a challenge to state-of-the-art models (Piper et al., 2021; Castricato et al., 2021; Mikhalkova et al., 2020; Dong et al., 2023). Antoniak et al. (2019) pioneered the representation and visualization of narratives as sequences of interpretable topics. While previous topic modeling analyses of Holocaust testimonies (Blanke et al., 2019) have provided valuable insights, they treated the corpus as a monolithic body of text, obscuring the unique narrative structure of individual testimonies. Furthermore, non-contextualized topic modeling such as LDA (Blei et al., 2001) treated the text as an unordered collection of words. Recent topic modeling techniques such as BERTopic (Grootendorst, 2022) and other contextualized topic models (Bianchi et al., 2020; Angelov, 2020; Pham et al., 2023) leverage language model representations to better identify and predict the topics of a text. While such methods have been applied to Holocaust testimonies (Wagner et al., 2022), the main focus there was on the segmentation of the testimonies for topic modeling. Our contributions are as follows:

• We apply a contextualized topic modeling approach, BERTopic, to Holocaust testimonies, revealing the main themes and their distribution.

• We examine the evolution of topics across aligned sections of testimonies, revealing a typical narrative scheme.

• We investigate how age and gender are expressed in the narrative structure of testimonies, highlighting distinctions between survivor subgroups.
• We introduce a novel method for identifying divergent testimonies, i.e., testimonies within a given group that exhibit atypical topic distributions, resembling patterns more characteristic of other groups. We demonstrate it in a case study of different age groups.

We note that related contributions appear in an unpublished paper of ours (under review; anonymized) that uses an earlier contextualized topic model (CTM; Bianchi et al., 2020) for a similar process. The current paper uses a better-performing model (Grootendorst, 2022) in terms of topic diversity and coherence, and a more detailed and precise narrative analysis approach.

Corpus Level Statistics

This paper analyzes transcripts from the USC Shoah Foundation, a corpus containing 1000 oral testimonies in English. Survivors originated from over 30 countries, with a significant representation from Poland and Germany. The testimonies were recorded between 1996 and 2015, offering insights into the survivors' experiences decades after the events of the Holocaust. The length of the testimonies ranges from 3K to 88K words, with a mean length of 23K words. Each testimony contains an average of 250 questions, with the majority of question-answer pairs (95%) consisting of no more than 400 words. Fig. 1 illustrates the distribution of testimony lengths.

BERTopic: Topic Analysis

We use BERTopic to identify the topics within the corpus. Preprocessing involves merging consecutive very short sections (question-answer pairs of <200 words) and dividing very long sections (>450 words) to mitigate potential outlier effects. BERTopic leverages all-MiniLM-L6-v2 (Wang et al., 2020) document embeddings and a TF-IDF-based clustering approach, providing a context-aware analysis that surpasses traditional methods like LDA (Blei et al., 2001). For dimensionality reduction, UMAP (McInnes and Healy, 2018) is employed before clustering with HDBSCAN (McInnes et al., 2017). Unlike LDA, BERTopic dynamically determines the number of topics, requiring only the minimum cluster size for HDBSCAN, which results in greater flexibility. Our dataset yielded 58 topics, with approximately 4% of sections classified as outliers with an "unknown topic". We set the minimum cluster size to 50 sections.

To ensure interpretability, BERTopic extracts c-TF-IDF word representations (https://maartengr.github.io/BERTopic/api/ctfidf.html) from each section's cluster, revealing the importance of words within each topic. The most representative word is selected for the initial topic representation. A domain expert then manually reviews these word sets and assigns a descriptive title to each topic, ensuring both accuracy and clarity. Notably, the topics detected by the model align with those outlined in the USC Shoah Foundation's interviewer guidelines but also extend well beyond them. The guidelines encourage interviewers to ask about pre-war life, family, religion, politics, community, and experiences of antisemitism. The model's successful detection of these themes confirms its effectiveness in identifying the core topics.
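A minimal sketch of the pipeline just described is shown below, assuming the bertopic, sentence-transformers, umap-learn, and hdbscan packages. The `sections` list stands in for the full corpus of question-answer texts, which is not distributed with the paper; the UMAP parameters are library defaults assumed here, and in practice the model must be fitted on the full corpus, not a handful of strings.

```python
# Sketch of the BERTopic pipeline described above. `sections` is a
# placeholder; the real input is the full list of question-answer texts.

from bertopic import BERTopic
from sentence_transformers import SentenceTransformer
from umap import UMAP
from hdbscan import HDBSCAN

sections = ["...question-answer text 1...", "...question-answer text 2..."]

embedding_model = SentenceTransformer("all-MiniLM-L6-v2")
umap_model = UMAP(n_neighbors=15, n_components=5, metric="cosine")  # assumed defaults
hdbscan_model = HDBSCAN(min_cluster_size=50)  # minimum cluster size from the paper

topic_model = BERTopic(embedding_model=embedding_model,
                       umap_model=umap_model,
                       hdbscan_model=hdbscan_model)
topics, probs = topic_model.fit_transform(sections)

# c-TF-IDF keywords per topic; topic -1 collects outliers ("unknown topic").
print(topic_model.get_topic_info().head())
```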
Narrative analysis

This study analyzes individual survivor testimonies as narratives, i.e., sequences of interpretable topics (Antoniak et al., 2019). We aim to construct comprehensive narratives from the corpus testimonies that enable comparisons without sacrificing their temporal structure. Several challenges arise in this analysis. First, each testimony comprises a large number of sections (250 on average), which conflicts with direct interpretation and visualization. Second, variations in testimony length complicate direct comparisons of narrative structures.

To address this, we divide testimonies into a fixed, manageable number of parts, defining a part's theme representation as the distribution of its sections' topics. This division requires considering the trade-off between preserving temporal detail and achieving clear visualization and comparison. A large number of parts yields more nuanced narratives but risks excessive detail and redundancy, whereas fewer parts allow better interpretation and visualization at the expense of obscuring finer temporal shifts in topics. After careful examination and consultation with domain experts in Holocaust studies and digital humanities, we divide testimonies into 15 equal parts. This strikes a balance between the need for detail and the goals of clear visualization and comparison of topic distributions across parts.

Typical Testimony Narrative Schema

This section examines the most common topics covered in each part of a Holocaust survivor testimony, as well as the variation in topic representation between the different testimony parts. The analysis is based on Fig. 3, which shows the distribution of topics across the 15 parts into which each testimony was divided. The first part of all testimonies is dominated by the topic of self-presentation, followed by the family topic. It is perhaps unsurprising that many testimonies begin this way, as survivors introduce themselves and their families to the interviewer. The fact that the self-presentation topic rarely appears later may not constitute a significant finding, but it is nevertheless important, as it validates the model's analytic capability. The next two parts of the testimonies also reveal a number of common topics associated with the description of community life before the war. These include family, education, religion, house, and sport. The latter part also contains the topic of war news, hinting at the events to come.

In contrast to the dominance of common topics at the beginning of the testimonies, the middle parts show greater variance in the topic distributions. Each part typically features several common topics with similar percentages (around 5-15%). This might reflect the diversity of experiences among Holocaust survivors. The middle-part topics vary, starting with ghettos and war news, moving to concentration/death camps and food, and resolving in the rising dominance of the camp liberation topic.

In the final parts of the testimony narratives, the model once again identifies a few dominant topics. These include interview-related topics such as presenting family pictures and discussing life after the Holocaust. Additionally, topics related to life after the war emerge, such as immigration and establishing work and marriage in new countries.

In conclusion, the BERTopic model successfully identifies a typical structure for Holocaust survivor testimonies, particularly at the beginning and end. The middle sections show more variation, reflecting the different experiences of individual survivors.
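The fixed-part alignment just described can be sketched as follows: each testimony's ordered section-topic sequence is split into 15 equal parts, and each part is represented by its normalized topic histogram. The topic count and example assignments below are illustrative.

```python
# Sketch of the fixed-part alignment: split a testimony's ordered section
# topics into 15 equal parts, each represented by a topic histogram.

import numpy as np

def testimony_to_parts(section_topics, n_parts=15, n_topics=58):
    """section_topics: ordered list of topic ids, one per section.
    Returns an (n_parts, n_topics) matrix of topic proportions."""
    parts = np.array_split(np.asarray(section_topics), n_parts)
    dist = np.zeros((n_parts, n_topics))
    for i, part in enumerate(parts):
        part = part[part >= 0]            # drop BERTopic's -1 outlier label
        if len(part):
            counts = np.bincount(part, minlength=n_topics)
            dist[i] = counts / counts.sum()
    return dist

example = testimony_to_parts([0, 0, 3, 7, 7, 7, 12, 5, 5, 41] * 25)
print(example.shape)  # (15, 58)
```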
Gender and Age as Expressed in Testimony Narratives

This section introduces a method for comparing the narrative trajectories present within different groups of Holocaust survivor testimonies. We apply this method to investigate gender- and age-based differences in testimonial narratives.

To begin, we compute a typical testimony path for each group under consideration (e.g., male vs. female, young vs. older survivors). This 'typical' testimony schema represents the average topic distribution across the 15 fixed parts. Next, we perform t-tests for each part to quantify the differences in topic prevalence between groups. Topics with a substantial t-value (above 3.5) and a low probability of such deviation arising by chance (p-value under 0.01) are flagged as characteristic of the group in which they are more prevalent.

Consider the age-based comparison between younger survivors, born 1925-1940, who experienced the Holocaust as children (522 testimonies), and older survivors, born 1902-1925, who were adults during the Holocaust (467 testimonies). Fig. 4 reveals interesting distinctions. Topics like "Childhood Memories" and "Food" dominate the middle parts of younger survivors' testimonies, while "Life Perspective" features in the final parts. Conversely, "Marriage", "Work", and "War News" are more prominent in the middle of older survivors' accounts. Interestingly, while education-related topics seem more prevalent at the beginning of older survivors' testimonies, they tend to re-emerge near the end for the younger group.

Turning to the gender-based analysis, with a balanced corpus of 531 male and 469 female testimonies, Fig. 5 highlights potential differences. Topics like "Bar Mitzvah", "Army", "Camp Liberation", and "Work" are more characteristic of men's testimonies. In contrast, "Birth", "Childhood Memories", "Parents", and "Marriage" are more prevalent in women's testimonies. This analysis reveals how men and women may structure their narratives differently, particularly in the middle sections of their testimonies.

The USC Shoah Foundation's interviewer guidelines do not provide specific instructions for ordering topics or tailoring questions based on the subject's age or gender. This suggests that the observed differences in narrative structure between these groups are not a direct result of the guidelines. Rather, they may stem from the interviewers' individual approaches or the survivors' unique experiences and perspectives.

Exploratory Study: Identifying Diverging Narratives

This study introduces a novel method for identifying testimonies within a specific group that exhibit topic distribution patterns more characteristic of another group. Our goal is to pinpoint narratives that stand out as atypical within their designated category. We achieve this by defining a scoring function that quantifies the similarity between a testimony's topic distribution and the typical narrative patterns of a different group. Let us formalize the scoring function, which yields a high score for testimonies from group A that resemble the narrative patterns typical of group B.
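The per-part, per-topic group comparison described above can be sketched with Welch t-tests over two stacks of part-by-topic testimony matrices. The random data below only illustrates the shapes; the thresholds match those stated in the text.

```python
# Sketch of the per-part, per-topic group comparison: Welch t-tests between
# two groups of (n_parts, n_topics) testimony matrices, flagging pairs with
# |t| > 3.5 and p < 0.01. The data here is random, for shape illustration.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
group_a = rng.dirichlet(np.ones(58), size=(522, 15))  # (testimonies, parts, topics)
group_b = rng.dirichlet(np.ones(58), size=(467, 15))

t, p = ttest_ind(group_a, group_b, axis=0, equal_var=False)  # per (part, topic)
characteristic_a = np.argwhere((t > 3.5) & (p < 0.01))   # more prevalent in A
characteristic_b = np.argwhere((t < -3.5) & (p < 0.01))  # more prevalent in B
print(len(characteristic_a), len(characteristic_b))
```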
Let t = (t_1, t_2, ..., t_15) represent a testimony's topic distributions from group A, where each t_i is a vector of topic probabilities for part i. Let C_A = {(i_1, j_1), (i_2, j_2), ..., (i_n, j_n)} denote the characteristic topic-part pairs for group A, where i_x is a part index and j_x is a topic index. These pairs have high t-values (>3.5) and low p-values (<0.01) in the group comparison. C_B similarly represents the characteristic topic-part pairs for group B. Finally, we apply an argmax operation to spot the testimonies exhibiting the highest resemblance to group B's typical narrative (a sketch of one possible formalization is given after the figure captions below).

When comparing the older and younger survivor groups, Fig. 7 presents the distribution of resemblance scores for testimonies within the groups. The uneven distribution favoring negative scores reveals that higher scores tend to be related to non-conforming narratives. Using this method, Fig. 6 highlights two specific examples: a younger survivor whose narrative strongly resembles the older group, and vice versa, each emphasizing topics characteristic of the opposite group.

Conclusion and Future Work

This study applies NLP techniques to explore the complex narratives within the USC Shoah Foundation's Holocaust testimonies. Contextualized topic modeling with BERTopic reveals key themes and their distributions within the corpus. By aligning testimonies into fixed parts, we unveiled a common narrative trajectory along with age- and gender-based variations. Our method detects divergent testimonial narratives, identifying those within one group that exhibit topic patterns characteristic of another group. Future work will extend the analysis by comparing survivor narratives across other corpora and testimonial archives to identify both shared and distinct narrative patterns.

Figure 1: Histograms of the number of words and the number of QA pairs per testimony.

Figure 4: Adults vs. young survivors, typical testimony t-test. Black points represent values with p-values under 0.01.

Figure 5: Men vs. women survivors, typical testimony t-test. Black points represent values with p-values under 0.01.
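The exact scoring function is not reproduced here; one plausible formalization consistent with the definitions above, assuming the score rewards a testimony's probability mass on group B's characteristic (part, topic) pairs and penalizes mass on group A's own pairs, is sketched below. The paper's actual weighting may differ.

```python
# One plausible resemblance score consistent with the definitions above.
# This form (sum over B's characteristic pairs minus sum over A's) is an
# assumption for illustration, not necessarily the paper's exact function.

import numpy as np

def resemblance_score(t, pairs_a, pairs_b):
    """t: (15, n_topics) topic-distribution matrix of a group-A testimony.
    pairs_a / pairs_b: lists of characteristic (part_index, topic_index)."""
    gain = sum(t[i, j] for i, j in pairs_b)  # mass on B's characteristic pairs
    loss = sum(t[i, j] for i, j in pairs_a)  # mass on A's own pairs
    return gain - loss

def most_b_like(testimonies_a, pairs_a, pairs_b):
    """Argmax over group-A testimonies of the resemblance to group B."""
    scores = [resemblance_score(t, pairs_a, pairs_b) for t in testimonies_a]
    return int(np.argmax(scores)), scores
```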
Effect of Different Compatibilizers on Sustainable Composites Based on a PHBV/PBAT Matrix Filled with Coffee Silverskin

This work investigates the feasibility of using coffee silverskin (CSS), one of the most abundant coffee waste products, as a reinforcing agent in biopolymer-based composites. The effect of using two compatibilizers, a maleinized linseed oil (MLO) and a traditional silane (APTES, (3-aminopropyl)triethoxysilane), on the mechanical and thermal behavior of sustainable composites based on a poly(butylene adipate-co-terephthalate)/poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PBAT/PHBV) blend filled with coffee silverskin, in both the as-received state and after the extraction of antioxidants, was studied. Thermal (by differential scanning calorimetry), mechanical (by tensile testing), and morphological properties (by scanning electron microscopy) of injection molded biocomposites at three different weight contents (10, 20, and 30 wt %) were considered and discussed as a function of compatibilizer type. The effects of the extraction procedure and the silane treatment on the surface properties of CSS were investigated by infrared spectroscopy. The results confirmed that extracted CSS and silane-treated CSS provided the best combination of resistance properties and ductility, while MLO provided a limited compatibilization effect with CSS, due to the reduced amount of hydroxyl groups on CSS after extraction, suggesting that the effects of silane modification were more significant than the introduction of the plasticizing agent.

Introduction

Drivers such as sustainability, energy efficiency, and reduced waste generation and greenhouse gas emissions are emerging powerfully in the current industrial economy, though it is still dominated by a linear extract-process-consume-dispose philosophy that makes it highly and inherently unsustainable [1]. On the contrary, the recently introduced circular economy model aims at sustainable development by reducing waste and maintaining the value of resources and products for as long as possible, extracting the maximum value from them whilst in use, then recovering and regenerating products and materials at the end of their service life. This goal can be accomplished by using renewable energy, by limiting toxic chemicals, by developing bio-benign products, and finally by the elimination of waste [2]. In this context, bio-based sourcing of plastics along with waste valorization approaches could significantly contribute to the adoption of a circular economy model.

As regards the valorization of waste into high-value-added products, agro-food waste presents profitable opportunities due to its great worldwide availability. It is therefore of utmost importance to develop highly sustainable composite materials that combine biopolymers as matrices and agro-industrial residues as fillers. A major source of residues is represented by the coffee industry in the form of defective beans, coffee silverskin (CSS), and spent coffee grounds (SCG), which can indeed cause severe contamination and environmental problems if discarded in landfills [3,4]. It is not surprising that over the last few years several efforts have been made to valorize the by-products resulting from coffee processing. SCG have been extensively investigated as a filler in composite materials [5-10], as a source of important ingredients such as oil, terpenes, caffeine, and polyphenols [11,12], and more recently as a source for quantum dots [13].
Indeed, comparatively less attention has been paid to the major by-product of the coffee roasting process, i.e., coffee silverskin, a thin tegument forming the outer layer of coffee beans that is discarded due to the expansion of the beans occurring during roasting [4,14]. Traditional uses of CSS include mainly compost and soil fertilization but, more recently, its chemical composition, rich in dietary fibers, phenolic compounds, and melanoidins, has aroused a lot of interest in the food, cosmetic, and pharmaceutical industries [15]. Unfortunately, these methods cannot be considered the most efficient in terms of value addition, especially in view of the large availability of such residues, which is estimated at around 2 billion tons per year [16]. In this framework, a sustainable exploitation of these residues is their use as filler in new biodegradable materials, which has rarely been addressed in the literature [10,17]. In a previous work [17], the feasibility of using coffee silverskin as a reinforcing agent in biocomposites based on a PBAT/PHBV commercial blend was demonstrated, even if issues were raised about the poor interfacial adhesion that currently prevents a full exploitation of the potential of such environmentally friendly composites.

The use of compatibilizers has been widely proposed in the literature with the aim of increasing the interfacial adhesion between natural fillers and polymer matrices. In order to preserve the environmentally friendly character of these biocomposites, a cost-effective solution can rely on the use of natural compatibilizers as an alternative to standard fossil-based compatibilizers. In this regard, vegetable oils are an interesting class of additives that can combine a plasticizing and a compatibilizing effect [18-21]. In view of all these issues, this work reports for the first time on the effect of two compatibilizers, a maleinized linseed oil (MLO) and a traditional silane, on sustainable composites based on a PBAT/PHBV blend filled with coffee silverskin, in both the as-received state and after the extraction of antioxidants. To this purpose, the thermal, mechanical, and morphological properties of injection molded biocomposites were characterized and interpreted as a function of compatibilizer type.

Materials

Coffee silverskin was obtained from a coffee-roasting company located in Rome (Italy) in the form of a blend of 75% (w/w) Arabica and 25% (w/w) Robusta. CSS was used in the as-supplied state (CSS_N) and also after extraction of antioxidants (CSS_T). A commercial grade of a biopolymer blend (65% PBAT-35% PHBV) supplied by Nature Plast was used as the matrix for the biocomposites.

Characterization of Coffee Silverskin

Extraction of antioxidants from coffee silverskin was carried out in batch mode using water as solvent. Fifty grams of the material (75% Arabica and 25% Robusta) and 1 L of distilled water were loaded into a cylindrical glass vessel (1.5-L working volume) and stirred at 40 ± 0.1 °C and 800 rpm for 60 min. The extractor was provided with a mechanical stirrer and a thermostated water jacket. The resulting suspension was paper filtered and the liquid analyzed for the determination of total phenolics and antioxidant activity. The solid residue was dried in a forced-air dehydrator (Stöckli, Switzerland) operated at 40 °C and stored at room temperature for composite preparation. Moisture content was determined by oven drying at 105 °C to constant weight.
The Folin-Ciocalteu method was used to determine the amount of phenolic compounds [22] as gallic acid equivalents (GAE) per unit weight of dry solid, using a calibration curve obtained with gallic acid standards. Two different assays, namely DPPH and ABTS [22,23], were used for the assessment of the antioxidant activity of the CSS extract. The results were expressed as Trolox equivalents (TE) per unit weight of dry solid, using calibration curves obtained with Trolox standards. All the analyses were carried out in duplicate. An EA3000 elemental analyzer (Eurovector, Pavia, Italy) enabled the determination of the elemental composition, while the ash content was measured according to ASTM E830 and ASTM D1102. The amount of oxygen was calculated by difference on a dry, ash-free basis, according to Equation (1):

O (wt %) = 100 − (C + H + N) (1)

The silanization treatment included a preliminary alkali treatment of CSS in an alkali water bath (5% NaOH) for 24 h with constant stirring. CSS was then drained, washed with distilled water, and subsequently dried at 60 °C for 24 h. Silanization was carried out in a water:acetone bath (50:50 v/v) containing 1% by weight of the corresponding silane, with magnetic stirring for 2 h. After this treatment, the silanized CSS was removed from the bath, drained, and dried at room temperature for 48 h. Thermogravimetric analysis (TGA) (Seiko Exstar 6300, Tokyo, Japan) was performed to assess the thermal stability of CSS up to 800 °C in a nitrogen atmosphere with a heating rate of 10 °C/min. Fourier transform infrared (FTIR) spectra were recorded on a FTIR spectrometer (JASCO 680 Plus, Easton, MD, USA) using the KBr-pellet method in the 4000-400 cm−1 wavenumber range.

Production of biocomposites

Composite specimens for mechanical characterization were manufactured by injection molding with the mold temperature kept at 25 °C and an injection temperature of 165 °C, with the following pressure cycle: P_injection = 6 bar (hold time = 0.1 s) and 8 bar (hold time = 8.5 s). The different formulations (Table 1) were previously compounded by a vertical twin-screw extruder (DSM Explore 5&15 CC Micro Compounder) with the following parameters: 50 rpm screw speed, 2 min mixing time, and a temperature profile of 150-155-160 °C.

Mechanical Characterization of Biocomposites

Tensile tests were performed in displacement control on a Zwick/Roell Z010 (Kennesaw, GA, USA) with a crosshead speed of 10 mm/min. The specimen dimensions were in agreement with UNI EN ISO 527-2 (type 1BA) and a gauge length of 30 mm was used. The results are reported as the average of at least five tests.

Thermal Characterization of Biocomposites

Three samples of each formulation were subjected to differential scanning calorimetry (DSC) using a Mettler Toledo 822e (Columbus, OH, USA) with the following thermal program in a nitrogen atmosphere: heating from −20 to 200 °C (5 min hold), cooling to −20 °C (5 min hold), and heating to 200 °C, all steps at 10 °C/min. As PBAT exhibits a very low crystallinity [24], the degree of crystallinity (Xc) of the composites was calculated according to Equation (2):

Xc (%) = [(ΔHm − ΔHcc) / (w · ΔH0m(PHBV))] × 100 (2)

where ΔH0m(PHBV) is the enthalpy of melting of 100% crystalline PHBV (109 J/g) [25], w is the weight fraction of PHBV in the blend, ΔHm is the experimental enthalpy of melting of the sample (J/g), and ΔHcc is the enthalpy of cold crystallization.

Morphological Analysis

The filler morphology, along with the fracture surfaces of composites failed in tension, was investigated by scanning electron microscopy (SEM) using a Hitachi S-2500. All specimens were sputter coated with gold prior to observation.
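A tiny helper mirroring Equation (2) is sketched below; the example enthalpy values are hypothetical, not measured data from this study.

```python
# Crystallinity from DSC enthalpies, as in Equation (2).
# The example inputs are hypothetical placeholders.

def crystallinity(dh_m, dh_cc, w_phbv, dh_m0=109.0):
    """X_c in %, with dh_m and dh_cc in J/g (per gram of composite),
    w_phbv the PHBV weight fraction in the blend, and
    dh_m0 = 109 J/g the melting enthalpy of 100% crystalline PHBV."""
    return 100.0 * (dh_m - dh_cc) / (w_phbv * dh_m0)

print(f"X_c = {crystallinity(dh_m=18.0, dh_cc=3.0, w_phbv=0.35):.1f} %")
```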
Chemical, Thermal and Morphological Analysis of Raw and Treated CSS

The phenolic content of CSS was 7.6 ± 0.21 mg GAE/g dw. This value is significantly lower than the one obtained by Conde et al. [26], which could be attributed to differences in coffee variety and roasting conditions. The CSS blend employed in the present study was richer in Arabica, which, according to Farah [27], has a lower antioxidant content compared to Robusta. In addition, Bessada et al. [16] recently showed that the geographical origin plays a significant role in the chemical composition of silverskin obtained from Coffea canephora beans, especially in terms of fatty acid profile and antioxidant composition. Furthermore, the treatment was carried out under mild temperature conditions to preserve the activity of the antioxidant compounds. This value represents 47% of the total phenolic content obtained by Sarasini et al. [17] employing the same CSS blend. The extraction yield obtained is quite low, which could be attributed to the extraction conditions used. In fact, according to Zuorro and Lavecchia [28], the affinity of many antioxidants for hydro-alcoholic solvents is higher than for pure water. However, in the present study, the treatment of CSS with water was required in order to remove the water-soluble components and improve the affinity with the polymer matrix. In fact, Zarrinbakhsh et al. [29] showed that a water-based green surface treatment of distiller's dried grains with solubles (DDGS), a major coproduct of the corn ethanol industry, led to a noticeable improvement in the degradation onset temperature of DDGS, due to the elimination of many water-soluble components, improving the resistance of the material during melt processing of the composite. Moreover, the PHBV-based bioplastic composite obtained with water-treated DDGS showed an enhanced modulus and an improved adhesion between the matrix and the filler compared to the composites produced with untreated material.

The removal of water-soluble compounds was confirmed in this study by the results of the elemental analysis. The ash content of water-extracted CSS was 4.55% ± 0.07% (w/w). The elemental composition calculated on a dry and ash-free basis (% w/w) was as follows: C = 49.43 ± 0.40, H = 6.45 ± 0.13, N = 7.41 ± 0.10, O = 36.71 ± 1.31. These values lead to an atomic oxygen-to-carbon ratio of 0.56, which is lower than that obtained for the untreated material [17], confirming that the amount of hydroxyl groups in the treated material decreased. The antioxidant activity of the CSS extract as determined by DPPH and ABTS was 3.87 ± 0.23 and 5.82 ± 0.29 mg TE/g dw, respectively. These two methods are based on different reactions, thus leading to different values [30]. The antioxidant capacity of CSS can be mainly ascribed to the presence of polyphenols. Other bioactive compounds can contribute to the antioxidant capacity of CSS, such as melanoidins, which are formed during the roasting of green coffee beans as a product of the Maillard reaction [31].

Derivative thermograms (DTG) under nitrogen atmosphere for untreated, treated, and silane-modified coffee silverskin are reported in Figure 1a, while the FTIR spectra of as-received, treated, and silane-modified CSS are reported in Figure 1b.
A typical three-step thermal degradation occurred: the first step, up to about 130-135 °C, can be related to moisture removal; after that, the maximum degradation rate, due to hemicellulose and cellulose decomposition, occurred at 336.5, 334, and 357 °C for CSS, CSS_T, and CSS_S, respectively. Comparing the degradation rates of CSS and CSS_T, the latter was found to be more thermally stable, reflecting that less damaged hemicelluloses remained, which pyrolyzed at a higher temperature [32]. In line with previous results on the thermal stability of silane-treated natural fibers, the silane treatment improved the thermal stability of the fiber by removing pectin and wax and exposing a higher amount of cellulose [33].

The FTIR spectrum of CSS reported in Figure 1b exhibited the typical absorption bands of lignocellulosic materials, as already observed in [17]. While no substantial differences were found in the case of CSS_T, the silane treatment removed pectin and wax, as attested by the absence of the peaks at 1731 and 1253 cm−1. The broad band at 3354 cm−1 is characteristic of O-H stretching of hydroxyl groups in polysaccharide chains. It became less intense for the extracted CSS, confirming the results of the elemental analysis. Similarly, as a result of the silane treatment, the hydrophilic character of the CSS was reduced and the water content decreased. Water also shows a bending vibration at around 1650 cm−1, which was less intense (or completely absent in the case of the signal at 1545 cm−1) in silane-treated CSS [34].

As regards the morphology of as-received and extracted CSS, no substantial differences can be noted from the micrographs obtained by scanning electron microscopy (Figure 2). It is evident that the flaky structure typical of coffee silverskin [17] has been partially dismantled and its fibrous structure resulted in the occurrence of several isolated fibers with diameters of tens of microns, further supporting its use as reinforcement in polymer composites.

Tensile Properties of Composites

With regard to the tensile properties, the incorporation of CSS into the PBAT/PHBV blend produced a substantial increase of the elastic modulus values, as can be inferred from the stress vs. strain curves shown in Figure 3 and the corresponding mechanical properties summarized in Table 2.
While this behavior is rather common, due to the stiff lignocellulosic fillers that prevent molecular mobility of the polymer chains, interestingly the tensile strength of the neat blend was not degraded; indeed, it was slightly improved by the addition of CSS, even if the composites with a content of 30 wt % appear to have reached a kind of saturation, likely due to a non-optimal distribution of the fillers in the molten polymer during compounding. As already commented on in [17], the tensile strength remained almost unchanged after the incorporation of up to 30 wt % of untreated coffee silverskin while, as expected, the modulus of the composites increased progressively with increasing filler content. In particular, the values of the elastic moduli reached 3.7 and 4.4 times the reference (neat blend) in the case of the Blend_30CSST_S and Blend_30CSS_T systems, respectively, whereas the improvement for the untreated and MLO-modified blends was 3.3 and 3.1 times the neat matrix. This result is well beyond the performance of a previously studied PP-based system containing CSS [10], where both a limited increase of the elastic modulus and a general decrease of the tensile strength were observed. In our case, all the produced blends with the different CSS types, contents, and modifications showed a general, even if limited, increase.

On the other hand, extracted CSS is more effective than as-received CSS, which is to be ascribed to a sounder interfacial adhesion, as confirmed by the SEM investigation of the fracture surfaces shown in Figure 4a,b. Composites with untreated CSS (Figure 4a) exhibited poor interfacial adhesion, with extensive fiber pull-out and debonding at the filler/matrix interface, while the extracted CSS fibers/particles (Figure 4b) appear more embedded within the matrix, with polymer ligaments connecting them to the matrix. This different behavior can be ascribed to the removal of water-soluble components during the extraction process. However, the injection-molded composites also became more brittle, as the elongation at break was severely impaired by the increasing content of CSS, irrespective of surface modification.
The neat blend, as previously reported [17], is characterized by a mixed-mode fracture morphology, with both ductile and relatively brittle zones due to the different deformation behavior of the single constituents. In the composite materials, limited ductility can only be observed at the micro scale (Figure 4a,b).

These mechanical results are somewhat different from the usual ones regarding the mechanical response of agricultural residues when added to several polymer matrices [21,35-38], where a lower tensile strength compared to the neat matrix is generally reported in non-compatibilized composite systems. With regard to the effect of compatibilizers, maleinized linseed oil was found not to improve the overall mechanical behavior, including the elongation at break (Figure 5 and Table 2). MLO and other vegetable oils have been reported as effective plasticizers in PLA and its blends with TPS, PCL, and PHB [18,19,39-41], allowing chain motion and improving their processing conditions. In all these systems, a substantial decrease in strength and stiffness is usually combined with a marked increase in ductility. In the present case, even for the blend modified with MLO, this decrease in strength properties was not accompanied by an improved ductility, as confirmed also by the observation of the fracture surface (Figure 6), which showed features similar to those of the neat blend, i.e., the coexistence of ductile and brittle areas. It is reasonable to assume that MLO is not particularly effective in the case of the PBAT/PHBV blend, considering that its efficiency on poly(3-hydroxybutyrate) is reported to be not as high as that for PLA [39].
The presence of extracted CSS counteracted this low efficiency, resulting in composites with mechanical properties higher than those of the neat blend (especially the modulus), partially ascribed to a better interfacial compatibility, as can be inferred from the SEM micrographs (Figure 7). In fact, MLO, with its highly reactive maleic anhydride functionality, can potentially provide a compatibilization effect with CSS, due to the formation of ester bonds between the multiple MAH groups and the hydroxyl groups of CSS. In the present case, this effect seems to be present but limited by the fact that MLO was added in formulations based on extracted CSS, which is characterized by a reduced amount of hydroxyl groups.

Interestingly, the modification of CSS with silane provided the best combination of resistance properties and ductility (Figure 5 and Table 2). Silane induced an improved interfacial adhesion [42], as can be seen in Figure 8, which allowed for an efficient stress transfer between the polymer and the filler, with the occurrence of filler breakage. The increased ductility can be related to a better filler wettability and dispersion achieved through compatibilization [43], thus reducing the presence of filler clusters that can act as points of stress concentration.
Thermal properties of composites

The thermal properties of the PBAT/PHBV blend-based formulations were analyzed with the aim of investigating the effect of the incorporation of as-received, treated, and silane-modified CSS into PBAT/PHBV blends. Specifically, DSC cooling and heating scans were performed to evaluate the effect of the different components on the crystallization and melting profiles. Table 3 summarizes the data obtained from the cooling and heating scans for all the produced formulations. Figure 9 shows the DSC thermograms for neat PBAT/PHBV and Blend_10CSS-based composites (Figure 9, Panel A) and for neat PBAT/PHBV and Blend_CSST_M-based composites (Figure 9, Panel B) during cooling (a) and second heating (b) scans.
In the cooling scan measurements, it is possible to note that a slight variation of the crystallization temperature of the PBAT component (Tc,PBAT) and a negligible variation of the crystallization temperature of the PHBV component (Tc,PHBV) were measured for the Blend_CSS_N-based composites. Furthermore, the thermal characterization showed a higher crystallization temperature (Tc,PBAT) in the Blend_CSST_M-based composites with increasing content of the CSST_M component. It is reasonable to assume that MLO modified the thermal properties of the neat blend. Figure 9b (Panels A and B) shows that the neat blend and its composites are characterized by double melting peaks located at nearly the same temperature; this underlines that the presence of the different CSS components did not affect the size and lamellar structures achieved during the crystallization process of the biocomposites [17,44]. The neat blend and the biocomposites exhibited cold crystallization peaks, indicating that amorphous regions able to crystallize were present [17]; this behavior has also been observed for other biobased polymers [45,46]. A slight decrease of the cold crystallization temperature (Tcc) was registered for the Blend_CSST_S-based formulations with respect to the neat blend and the other composites, while a slight increase of Tcc was observed for the Blend_CSST_M systems. In addition, the cold crystallization enthalpy (ΔHcc) values decrease with increasing amount of CSS in the biocomposites, as reported in Table 3 and Figure 9 (Panels A and B). The lowest value of ΔHcc was measured for Blend_30CSST_S; this indicates that the CSS-based systems presented reduced amorphous regions able to crystallize in the second heating scan. This means that the silane treatment of CSS was responsible for a reduced macromolecular flexibility and mobility upon increasing temperature for the blend, as a result of the increased interfacial adhesion with the surface-treated CSS. Nevertheless, small variations of the Tg values were detected in the presence of MLO when combined with treated and silane-modified CSS: the neat blend exhibited a Tg value of 42.4 °C, which slightly decreased to 40.7 °C for Blend MLO. The addition of increasing contents of CSS_T in the MLO-modified blends did not essentially change the Tg values (from 40.1 to 39.3 °C for Blend_10CSST_M and Blend_30CSST_M, respectively), meaning that the effect of the silane modification was more significant than the introduction of the plasticizing agent. No particular variations were registered for the melting temperatures of the PBAT and PHBV components for the different CSS types and amounts with respect to the neat blend. The melting peaks of PBAT (melting enthalpy ΔHm,PBAT) in the composite formulations (Table 3 and Figure 9, Panels A and B) showed decreased values with respect to the neat blend, while the melting enthalpy of PHBV was not considerably affected [17]. It is concluded that the nucleation of crystals that melt at high temperature is prevented by the filler, while the development of crystals that melt at lower temperature is favored. This behavior depends on the effect that CSS has on crystallization: it enables the crystallization of the low-melting component, acting as a nucleating agent, but it does not affect the crystallization behavior of the high-melting phase.
However, it has been measured that the incorporation of CSS in the selected biobased polymeric blend (PHBV/PBAT) determines an increase of the degree of crystallinity (Xc) over the range of produced and characterized formulations (Table 3), in accordance with other research dealing with the use of recycled wood materials, nanoclays, and coffee silverskin [17,24,25]. The crystallization behavior can also be correlated, in combination with the higher aspect ratio, to the reported results on the strength properties and Young's modulus of the neat blend and of the biocomposites with the different CSS typologies and contents (Figures 3 and 5 and Table 2): the modification of CSS with silane represented the best combination of tensile strength and ductility (Figure 5 and Table 2), in parallel with the higher values of Xc.

Conclusions

In the present study, the use of coffee silverskin in the as-received state (CSS_N), after polyphenol extraction (CSS_T), and after silane treatment (CSS_S) as a filler for PBAT/PHBV blends was investigated. Furthermore, the plasticizing effect of a maleinized linseed oil (MLO) was considered as a possible solution for tuning the mechanical response of the composites at three different CSS contents (10, 20, and 30 wt %). The resulting composites, manufactured by a common melt blending process, exhibited a reasonable combination of thermal and mechanical properties. In detail, the incorporation of CSS into the PBAT/PHBV blend produced a substantial increase in the elastic modulus values: extracted CSS and silane-treated CSS provided the best combination of resistance properties and ductility, due to a sounder interfacial adhesion, while MLO provided a limited compatibilization effect with CSS, due to the reduced amount of hydroxyl groups on CSS after extraction. The silane treatment of CSS was also responsible for a reduced macromolecular flexibility and mobility upon increasing temperature for the blend; on the other hand, small variations in the Tg values were detected in the presence of MLO when combined with treated and silane-modified CSS, suggesting that the effects of the silane modification were more evident than the introduction of the plasticizing agent. The improvement of the interfacial compatibility between CSS and biopolymers is indeed an issue of fundamental importance to fully exploit the potential of such composites in different applications.
Changing Trends in School Absenteeism and Identification of Associated Factors in Adolescents with Atopic Dermatitis Atopic dermatitis (AD) has a negative influence on school attendance. We aimed to identify factors associated with school absenteeism in adolescents with AD. We used data from the 3rd to 11th annual Korean Youth Risk Behavior Web-based Surveys completed from 2007 to 2015. Survey data were obtained from a stratified, multistage, clustered sample. Participants responded to the question "Have you ever been diagnosed with AD?" Factors associated with AD-related school absenteeism (ADSA), defined as at least one school absence due to AD, were evaluated. Among the 141,899 subjects, the prevalence of AD increased (17.3% to 24.2%), while that of ADSA decreased (7.3% to 2.6%) from 2007 to 2015. Compared to adolescents without ADSA, those with ADSA were more likely to be male and middle school students and to have negative mental health states, including suicidality. In the multivariate logistic regression model, the associations of sleep dissatisfaction and depression with ADSA were strong (adjusted odds ratios: 6.12, 95% confidence interval: 4.61-7.95; and 5.44, 95% CI: 5.23-5.67, respectively). The prevalence of ADSA has decreased despite an increase in the prevalence of AD in Korean adolescents; however, it is important for pediatricians to screen for factors associated with ADSA to improve school attendance in adolescents with AD.

Introduction

Atopic dermatitis (AD) is a chronically relapsing skin disease present in up to 20% of children and up to 3% of adults [1,2]. Marked by severe itching, AD is a chronic inflammatory skin disease prevalent in early childhood, often diminishing with age. However, its persistence into adolescence or late onset is linked to higher severity [3]. The etiology and pathogenesis of AD are not fully understood, but current understanding suggests a combination of impaired skin barrier function, immune dysregulation, and dysbiosis of the skin or gut microbiome [4]. The clinical diagnosis of AD involves assessing pruritus, characteristic skin lesions, a chronic clinical course, and a personal or family history of allergic diseases, with the differential diagnosis considering conditions such as contact dermatitis, seborrheic dermatitis, and psoriasis [5,6]. Treatment for atopic dermatitis includes skincare involving cleaning and moisturizing, environmental control, and medical therapy using topical corticosteroids or immunosuppressants, with recent advancements focusing on individualized approaches based on distinct phenotypes and endotypes [6]. In addition to the high prevalence and deleterious clinical effects, subjects with AD are likely to suffer considerably poorer attendance or performance at work or school, related to the negative impact of AD on children's sleep and behavior [2,7-12]. In a previous study, 32% of participants believed that AD affected their school or work life and reported an average of 2.5 days of absenteeism from schoolwork because of an AD flare [13]. In the United States (US), adults with AD were significantly more likely to have lost ≥6 workdays compared with adults without AD.

Materials and Methods

This study performed a secondary analysis utilizing data obtained from the 3rd through 11th Korean Youth Risk Behavior Web-based Surveys (KYRBSs), conducted over 9 years from 2007 to 2015 [23,24]. The KYRBS initiative was established in 2002 as a collaborative effort of the Korean Ministry of Education (MOE), the Ministry of Health and Welfare, and the Centers for Disease Control and
Prevention (KCDC) [23,24]. To assure survey quality, the comprehensibility, reliability, and validity of each question were examined every year by the KCDC. The reliability estimates for the KYRBS questionnaire have been rated as good [23].

The KYRBS is an annual cross-sectional online survey designed to evaluate various health-risk behaviors among Korean adolescents aged 12-18 years, covering students from the first year of middle school to the third year of high school in the Korean education system. The Institutional Review Board (IRB) of the KCDC approved this research for every year of the survey. The KYRBS utilized a three-stage stratified random cluster sampling method to achieve a nationally representative sample. In the stratification stage, the participants were stratified according to geographic region and school type (public or private, coeducational, and vocational school) to minimize sampling errors. In the sample allocation, approximately 75,000 students from 400 middle schools and 400 high schools were selected by proportional sampling to match the study population. In the stratified cluster sampling, one class from each grade was selected using stratified cluster extraction from the selected schools.

Written informed consent from parents or legal guardians was submitted before participation in the survey. Survey participation excluded age-eligible respondents with absenteeism, including dropout or expulsion, those with special needs (such as developmental disabilities), and those with dyslexia. Students voluntarily participated at their schools' computer laboratories using randomly assigned, unique identification numbers. After logging on to the KYRBS website, participants had to answer each question; nonresponses were not accepted. However, certain data were treated as missing due to logical errors or outliers. The reasons for nonparticipation were not specified; however, adolescents had the option to withdraw or choose not to complete the surveys during the assent process. Of the 685,710 targeted adolescents, 660,607 were included in the analysis.

Atopic Dermatitis and School Absenteeism

Lifelong diagnosis of AD was assessed by answering "Yes" to the following question: "Have you ever been diagnosed with AD by a doctor at any stage in your life?" The severity of current symptoms and the modality of treatment were not evaluated. Our primary outcome of interest was school absenteeism due to AD (hereafter referred to as AD-related school absenteeism [ADSA]), which was defined in our study as being absent for at least one day. Participants were asked the following question: "Within the past 12 months, about how many days of school did you miss due to your AD?" Their responses were classified into four categories for our study: no absences, 1-3 days, 4-6 days, and ≥7 days.

Demographic and Socioeconomic Characteristics

Information on sex, school grade, perceived socioeconomic status (SES), and academic achievement was assessed. Participants were categorized into middle school and high school students. The degrees of SES and academic achievement were evaluated by the following questions: "During the past 12 months, how would you subjectively rate your SES and academic performance, respectively?" Responses were categorized into high (high or middle-high), middle (middle), and low (middle-low or low).
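The recoding of five-level responses into three categories described above can be expressed compactly in code; the mapping below follows the grouping stated in the text, while the sample responses are placeholders.

```python
# Collapse five-level SES / academic-achievement responses into three
# categories (high, middle, low), following the grouping stated in the text.
RECODE = {
    "high": "high", "middle-high": "high",
    "middle": "middle",
    "middle-low": "low", "low": "low",
}

responses = ["middle-high", "low", "middle", "high", "middle-low"]  # placeholders
print([RECODE[r] for r in responses])  # ['high', 'low', 'middle', 'high', 'low']
```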
Dietary, Smoking, and Drinking Behaviors

Breakfast consumption frequency was investigated with the question, "In the past 7 days, how many days did you eat breakfast?" Participants could choose from the options "never, 1, 2, 3, 4, 5, 6, or 7 days". Individuals who reported eating breakfast less than twice a week were classified as having 'skipped breakfast', in accordance with the criteria established by the survey. Current smoking status at the time of the study was assessed by a reply of "more than 1 day over the past month" to the question "Do you smoke?" For alcohol consumption status, current drinkers were identified by a reply of "more than 1 day" to the question "How many days did you drink at least one shot glass of alcohol in the month preceding this survey?"

Emotional States and Suicidality

Subjective health was assessed with the question, "How healthy do you usually feel?", with response categories of healthy, average, and unhealthy. Perceived happiness was assessed by the question, "How happy do you usually feel?", and the responses were classified as happy, average, or unhappy. Sleep satisfaction was assessed by the question, "How satisfied are you with your sleep during the last week?", and the responses were categorized as enough, average, or not enough. Additionally, perceived stress was evaluated through the question, "To what degree are you usually stressed?", and the replies were classified as high, average, or low. Finally, depressive mood, suicidal ideation, and suicide attempts were assessed by the following questions: "Within the last year, did you feel sad, blue, or depressed, resulting in a cessation of your usual activities almost every day for two weeks or longer?"; "Within the last year, have you ever seriously considered committing suicide?"; and "Have you ever attempted suicide?" Binary responses (yes or no) were recorded.

Statistical Analysis

All statistical analyses were conducted using the complex-sample procedures available in the Statistical Package for the Social Sciences (SPSS) software version 21.0 (IBM Corp., Armonk, NY, USA). Given that the KYRBS data were gathered through a representative, stratified, and clustered sampling method, the data were weighted to account for the sample design. The chi-squared test was used for categorical variables and independent t-tests were used for continuous variables to compare the general characteristics between adolescents with and without ADSA. To identify the factors associated with ADSA, we performed a multivariable logistic regression analysis including several covariates. The results are expressed as adjusted ORs (aORs) and 95% confidence intervals (CIs). Statistical significance was indicated by p < 0.05 in all tests.

Changing Trends in the Rates of School Absenteeism among Adolescents with Atopic Dermatitis

During the 9 years, 660,607 subjects completed the survey, with a total response rate of 96.4% (range: 94.8% to 97.7%, Table 1). Among the subjects with AD (n = 141,899), there were 5468 (4.0%) subjects with ADSA (Table 2). While the prevalence of AD in adolescents increased (from 17.3% in 2007 to 24.2% in 2015, Table 1 and Figure 1), the prevalence of ADSA decreased during the survey years (7.3% in 2007, 2.6% in 2015, Table 1 and Figure 2). The prevalence rates of all three groups according to duration of school absenteeism among participants with ADSA also decreased during the survey years (Table 1 and Figure 2).
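The survey-weighted analysis described above can be illustrated with a short Python sketch; the variable names, weights, and data are hypothetical placeholders, and the sketch only demonstrates weighted prevalence estimation and a weighted logistic regression of the kind reported here. The original analysis used SPSS complex-sample procedures, which additionally account for stratification and clustering in the variance estimates.

```python
# Illustrative survey-weighted analysis (placeholder data, not the KYRBS dataset).
# The original study used SPSS complex-sample procedures; this sketch only
# demonstrates weighting, not full stratification/cluster variance estimation.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "adsa": rng.integers(0, 2, n),           # 1 = AD-related school absenteeism
    "male": rng.integers(0, 2, n),
    "middle_school": rng.integers(0, 2, n),
    "sleep_dissatisfied": rng.integers(0, 2, n),
    "weight": rng.uniform(0.5, 2.0, n),       # hypothetical sampling weights
})

# Weighted prevalence of ADSA
prev = np.average(df["adsa"], weights=df["weight"])
print(f"Weighted ADSA prevalence: {prev:.1%}")

# Weighted logistic regression -> adjusted odds ratios
X = sm.add_constant(df[["male", "middle_school", "sleep_dissatisfied"]])
model = sm.GLM(df["adsa"], X, family=sm.families.Binomial(),
               freq_weights=df["weight"]).fit()
print(np.exp(model.params))                   # aORs
print(np.exp(model.conf_int()))               # 95% CIs
```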
Differences According to School Absenteeism among Adolescents with Atopic Dermatitis

Those with ADSA were more likely to be male, middle school students, have lower SES, struggle with academics, frequently skip breakfast, drink alcohol, and currently smoke. They also reported feeling less healthy and less happy, along with experiencing higher levels of stress, dissatisfaction with sleep, depression, suicidal ideation, and suicide attempts compared to those without ADSA (Table 2).

Differences According to the Duration of the School Absences among Adolescents with Atopic Dermatitis-Related School Absenteeism

A total of 65.7% of the students were absent for 1-3 days (1-3-day group), 20.5% were absent for more than 7 days (≥7-day group), and 13.8% were absent for 4-6 days (4-6-day group) (Table 3). The proportion of female students was highest in the 1-3-day group, whereas the proportion of male students was highest in the 4-6-day group. Participants who missed more than seven days of school were more likely to have low SES and low academic achievement. They were also more likely to skip breakfast, frequently drink alcohol, be current smokers, and report more negative mental health variables except for depression (participants in the 4-6-day group had the highest proportion of depression). Additionally, they were more likely to report suicidal ideation and suicide attempts than those in the 1-3-day and 4-6-day groups.

Associated Factors for School Absenteeism in Individuals with Atopic Dermatitis

After adjustment, both being male and attending middle school were associated with ADSA (Table 4). High and low SES, breakfast skipping, alcohol consumption, and smoking were also associated with ADSA. Among the mental health variables, sleep dissatisfaction was the most strongly associated (aOR: 6.12, 95% CI: 4.61-7.95), followed by depression (aOR: 5.44, 95% CI: 5.23-5.67). In addition, suicidal ideation and suicide attempts were also significantly associated (aOR: 3.12, 95% CI: 1.93-4.13 and aOR: 2.30, 95% CI: 2.04-2.60, respectively). Following the selection of significant covariates, univariate and multivariate logistic regression analyses were performed to identify the factors associated with school absenteeism in adolescents with atopic dermatitis, yielding odds ratios (ORs), adjusted odds ratios (aORs), and 95% confidence intervals (CIs).
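For readers unfamiliar with how the reported aORs and CIs relate to the regression output, the following sketch shows the standard transformation from a logistic coefficient and its standard error to an odds ratio with a 95% CI; the coefficient values are invented for illustration, not taken from this study.

```python
# Converting a logistic regression coefficient to an odds ratio with 95% CI.
# beta and se are illustrative values, not coefficients from this study.
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Return (OR, lower, upper) for a logistic coefficient and its SE."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

or_, lo, hi = odds_ratio_ci(beta=1.69, se=0.14)  # exp(1.69) is about 5.4
print(f"aOR = {or_:.2f} (95% CI: {lo:.2f}-{hi:.2f})")
```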
Discussion

The main finding of this study was that adolescents with ADSA were more likely to engage in problematic behaviors and to report more negative mental health states, including suicidality, than those without ADSA, which was consistent with our hypothesis. Among the several associated factors, sleep dissatisfaction and depression were the most strongly associated with ADSA. Another notable finding of our study was that while the prevalence of AD increased, the prevalence of ADSA decreased during the survey years. Although the causal relationship (whether the severity of AD became milder or treatment became more effective during the survey period) cannot be distinctly explained due to the study design, this finding can be interpreted in several ways. First, from the perspective of public education, the effect of the Wee class project can be considered. It was initiated in 2008 by the MOE to provide comprehensive psychological support services to students, including counseling for depression, suicidality, bullying, and low school performance [25]. Second, manuals containing step-by-step response strategies for any school absenteeism (regardless of the type) were produced by the MOE and are being used in the field of education [26]. Regarding public health, we believe that the finding of a decreasing prevalence of ADSA may be the result of efforts to improve the effectiveness of AD therapy. For example, since 2010, the local governments of Korea have established educational and informational centers for the prevention of allergic diseases through locally
representative medical institutions [27]. Additionally, well-structured educational programs are being offered in numerous hospitals in Korea, potentially enhancing treatment compliance and ultimately improving treatment outcomes [27].

In meta-analyses, some of the risk factors for school absenteeism were related to characteristics of the youth (e.g., age, substance abuse, internalizing and externalizing problematic behaviors, and poor physical health), their family (e.g., parental psychiatric problems and unemployment, low SES, history of child abuse, and family breakup), their school (e.g., poor teacher quality), or their peer group (e.g., antisocial or delinquent peers) [19-21]. In particular, given that emotional disorders are identified as leading contributors to the burden of disease in adolescents [28,29], many studies have shown a relationship between emotional disorders and school absenteeism in the general adolescent population [28-30].

Similarly, our study identified several mental health variables as being significantly associated with ADSA. AD burdens adolescents in multiple ways [2,7,8,11,12]. Embarrassment due to disfigurement leads to reduced self-esteem, social stigmatization, and social isolation, thus causing stress in social relationships and bullying [8,31,32]. Bullying is another important and established cause of school absenteeism [18]; impairment of sleep is yet another. Children with AD are noted to have poor sleep efficiency and more daytime sleepiness due to frequent night wakening [12,33]. This exhaustion from sleep impairment leads to poor concentration at school [9,12,13]. These psychological stresses and AD symptoms, especially itching, form a vicious cycle: AD may be worsened by emotional stress, and patients with AD are more prone to express psychosomatic symptoms than normal controls [34]. Associations between depression and AD in adolescents are well known [31,34,35]. Emotional disorders, especially depression, in adolescents with AD may lead to school absenteeism through social withdrawal, loss of motivation, sleep disturbances, and low energy, as in the general adolescent population [30].

In our study, those with ADSA were predominantly male adolescents, and being male was a significant factor associated with ADSA. This is similar to the finding of a previous study revealing that one of the main factors associated with school absenteeism is being male [36]. However, this should also be considered from the perspective of the association between sex predominance and AD. Sex is a biological variable that should be considered in immunological studies; it is well demonstrated that respiratory allergies, especially asthma, are more prevalent in male students during childhood, while they are more frequent in female students from adolescence to adulthood [37]. However, no sex differences were found for exercise-induced anaphylaxis [37]. In terms of AD and food allergies, there were conflicting results, with male adolescents more commonly having insect venom allergies and female adolescents more commonly having drug allergies [37]. Although interventions to reduce school absenteeism should pay special attention to male adolescents with AD based on the results of our study, further well-controlled prospective studies are necessary to confirm these results.
Lastly, in the present study, although the statistical significance was relatively small, adolescents with ADSA were predominantly middle school students; in the logistic regression analysis, middle school enrollment was significantly associated with ADSA. Although it was not known whether the AD symptoms of middle school students were more severe than those of high school students in our study, these findings may be because middle school students are more vulnerable than high school students to psychosocial changes and lack the ability to control their emotional instability [28-30].

This study had some limitations. Many of them are inherent to secondary analyses based on self-reported questionnaires. First, because this study was cross-sectional in design, causal relationships between the associated factors and ADSA are not identifiable. In particular, the causes of the increasing prevalence of AD but decreasing proportion of ADSA could not be explained, as mentioned earlier, and the aforementioned well-known risk factors for school absenteeism could not be assessed [19,20]. Well-controlled prospective studies are necessary to overcome this limitation. Second, in the KYRBSs, school absenteeism was not addressed in adolescents without AD; therefore, we could not compare the differences in school absenteeism and associated factors between adolescents with and without AD, which is a major limitation. However, because it is well known that adolescents with AD are more likely to be absent from school compared to their healthy peers [2,7,8,11,12], we focused on which factors are involved in the school absences of adolescents with AD. Third, the surveys used to collect data did not encompass adolescents who had dropped out of school or were subject to expulsion. In addition, if nonparticipants were more prone to having AD, and the primary reasons for nonparticipation were related to AD, the estimates could be somewhat biased. Fourth, we evaluated only the lifelong diagnosis of AD regardless of the subjects' current symptoms; therefore, we could not separate currently active AD from previous (but treated or well-controlled) AD. We could not analyze the presence, modality, and failure of treatment, current symptoms, or severity because these data were not available from the survey. This may have led to the misclassification of some AD cases; if these data had been available, some conclusions would probably change. Despite these limitations, this study had several strengths that improve on the findings of previous reports of school absenteeism in adolescents with AD. To the best of our knowledge, factors associated with ADSA in adolescents have not previously been explored with a population dataset as large as the one used in this study. Also, this study was based on nationwide surveys with high response rates (96.4%). An equal proportion of socioeconomically diverse middle school and high school students was assessed annually, and all analyses in this study were based on sample weights. This allows for the generalization of the results in Korea. Additionally, this study examined trends in ADSA over time and adopted a comprehensive approach to identify factors associated with ADSA.
Conclusions

In conclusion, our findings suggest that while the prevalence of AD in adolescents in Korea increased, the prevalence of ADSA decreased during the survey years, with a mean prevalence of 4.0%. However, because subjects with ADSA engage in more problematic behaviors and have more negative mental health statuses than those without ADSA, when assessing adolescents with AD, physicians should inquire about school absences; furthermore, screening procedures for factors associated with ADSA should be emphasized in order to reduce ADSA. Mental health conditions that interfere with school attendance should be treated with a multidisciplinary team approach, including proper mental health referrals if necessary. Management of AD should include clear expectations about school attendance, and physicians, families, and schools should be key collaborators in interventions to reduce ADSA.

Figure 3 illustrates a decreasing trend in the prevalence rates of depressive mood, suicidal ideation, and suicide attempts among adolescents with atopic dermatitis throughout the survey (depressive mood: 45.8% in 2007 to 26.5% in 2015; suicidal ideation: 27.9% in 2007 to 13.1% in 2015; suicide attempts: 7.5% in 2007 to 2.7% in 2015).

Figure 1. Changing trends in the prevalence of atopic dermatitis.

Figure 2. Changing trends in school absenteeism among adolescents with atopic dermatitis according to the days absent (1-3 days (A), 4-6 days (B), more than 7 days (C), and more than one day as total (D)).

Figure 3. Changing trends in depressive mood, suicidal ideation, and suicide attempts among adolescents with atopic dermatitis during the survey years.

Table 1. Changing trends in the prevalence of atopic dermatitis and school absenteeism among adolescents with atopic dermatitis in Korea from 2007 to 2015.

Table 2. Differences according to school absenteeism among 141,899 adolescents with atopic dermatitis. Survey data were weighted to ensure statistical representation of the general population according to the sample design. The chi-squared test was employed to assess statistical differences among categorical data, while the independent t-test was utilized for continuous variables.

Table 3. Differences according to the duration of school absence among 5468 adolescents with atopic dermatitis-related school absenteeism.

Table 4. Associated factors for school absenteeism among adolescents with atopic dermatitis.
12/15-lipoxygenase inhibition attenuates neuroinflammation by suppressing inflammasomes

Introduction: Lipoxygenases (LOXs) have essential roles in stroke, atherosclerosis, diabetes, and hypertension. 12/15-LOX inhibition has been shown to reduce infarct size and brain edema in the acute phase of experimental stroke. However, the significance of 12/15-LOX in neuroinflammation, which has an essential role in the pathophysiology of stroke, has not yet been clarified.

Methods: In this study, ischemia/recanalization (I/R) was performed by occluding the proximal middle cerebral artery (pMCAo) in mice. Either the 12/15-LOX inhibitor (ML351, 50 mg/kg) or its solvent (DMSO) was injected i.p. at recanalization, after 1 h of occlusion. Mice were sacrificed at 6, 24, and 72 h after ischemia induction. Infarct volumes were calculated on Nissl-stained sections. Neurological deficit scoring was used for functional analysis. Lipid peroxidation was determined by the MDA assay, and the inflammatory cytokines IL-6, TNF-alpha, IL-1beta, IL-10, and TGF-beta were quantified by ELISA. The inflammasome proteins NLRP1 and NLRP3, 12/15-LOX, and caspase-1 were detected with immunofluorescence staining.

Results: Infarct volumes, neurological deficit scores, and lipid peroxidation were significantly attenuated in the ML351-treated groups at 6, 24, and 72 h. ELISA results revealed that the pro-inflammatory cytokines IL-1beta, IL-6, and TNF-alpha were significantly decreased at 6 h and/or 24 h of I/R, while the anti-inflammatory cytokines IL-10 and TGF-beta were increased at 24 h or 72 h with ML351 treatment. NLRP1 and NLRP3 immunosignaling were enhanced at all three time points after I/R and were significantly diminished by ML351 application. Interestingly, NLRP3 immunoreactivity was more pronounced than NLRP1. Hence, we proceeded to study the co-localization of NLRP3 immunoreactivity with 12/15-LOX and caspase-1, which indicated that NLRP3 was co-localized with 12/15-LOX and caspase-1 signaling. Additionally, NLRP3 was found in neurons at all time points and also in non-neuronal cells 72 h after I/R.

Discussion: These results suggest that 12/15-LOX inhibition suppresses ischemia-induced inflammation in the acute and subacute phases of stroke via suppression of inflammasome activation. Understanding the mechanisms underlying lipid peroxidation and its associated pathways, like inflammasome activation, may have broader implications for the treatment of stroke and other neurological diseases characterized by neuroinflammation.
Introduction

Lipoxygenases (LOXs) are a family of lipid-oxidizing enzymes that generate eicosanoids and related compounds from arachidonic acid and other polyunsaturated fatty acids. 12/15-LOX is special in that it can directly oxidize lipid membranes containing polyunsaturated fatty acids, without the preceding action of a phospholipase, allowing a direct attack on organelles. This presumably underlies the cytotoxic activity of 12/15-LOX, which is upregulated in neurons and endothelial cells after stroke (van Leyen et al., 2014; Karatas and Cakir-Aktas, 2019). A major effect of 12/15-LOX in ischemic brain injury is its direct effect on lipid peroxidation, which leads to delayed cell death, blood-brain barrier (BBB) damage, and edema formation (Jin et al., 2008). Recent studies have suggested that 12/15-LOX inhibition may enhance neuroplasticity and promote neuronal survival in the post-stroke period (Yigitkanli et al., 2013). Although 12/15-LOX inhibition is known to reduce infarct size and brain edema in the acute phase of experimental stroke (Yigitkanli et al., 2013; Karatas et al., 2018), the effect of 12/15-LOX on stroke-induced neuroinflammation has not yet been clarified. Neuroinflammation is the most prominent feature of ischemic stroke pathology (Eltzschig and Eckle, 2011; Iadecola and Anrather, 2011; Maida et al., 2020). Membrane lipid peroxidation by 12/15-LOX may induce inflammation by activating NLRP3 (Liang et al., 2021). The role of 12/15-LOX in neuroinflammation is complex and multifaceted, involving the production of pro-inflammatory mediators such as leukotrienes and reactive oxygen species. Overall, further research is needed to fully elucidate the role of 12/15-LOX in stroke pathophysiology and to determine whether it represents a viable therapeutic target for stroke treatment.
Neuronal cell death, and especially necrotic cell debris, strongly triggers inflammation following ischemic stroke (Duris et al., 2018; Puleo et al., 2022). Inflammasomes act as sensors that detect tissue damage. These protein complexes increase the production and release of pro-inflammatory cytokines such as IL-6, IL-1beta, and TNF-alpha in peripheral tissues (Xu et al., 2021; Puleo et al., 2022). In particular, the nucleotide-binding oligomerization domain (NOD)-like receptor (NLR) pyrin domain-containing 1 (NLRP1) and 3 (NLRP3) inflammasomes are overexpressed in the brain following ischemia (Fann et al., 2018; Jiang et al., 2020; Qiu et al., 2022). Upon activation, NLRP3 converts procaspase-1 into cleaved caspase-1 (Ogura et al., 2006). This activated caspase-1 then triggers the inflammatory process by converting pro-IL-1beta and pro-IL-18 into their active forms. Consistently, some studies have shown that inhibition of the NLRP3 inflammasome reduces ischemia/recanalization (I/R) injury and protects the BBB under both in vivo and in vitro ischemic conditions (Abulafia et al., 2009; Gao et al., 2017; Bellut et al., 2021). Therefore, this study aimed to investigate the role of 12/15-LOX inhibition, using the novel and potent 12/15-LOX inhibitor ML351, in ischemia-induced neuroinflammation and inflammasome activation.

GRAPHICAL ABSTRACT: Effects of 12/15-LOX inhibition by ML351 on the acute and subacute phases of neuroinflammation following I/R. ROS-induced oxidative stress exacerbates the production of oxidized lipids by the 12/15-lipoxygenase enzyme in the ischemic brain. These oxidized lipids stimulate neuroinflammation and contribute to the pathophysiology of I/R. Neuroinflammation is driven especially by the NLRP1 and NLRP3 inflammasome protein complexes. The inflammasomes convert procaspase-1 into its active form, caspase-1. Caspase-1 facilitates the cleavage of the pro-inflammatory cytokine IL-1beta into its active form and leads to the release of IL-1beta, especially from neurons, in the acute phase of neuroinflammation. Production of other pro-inflammatory cytokines (TNF-alpha, IL-6) is significantly increased at 24 h of stroke, which also initiates the synthesis of anti-inflammatory cytokines (TGF-beta, IL-10) to protect the tissue. The 12/15-LOX inhibitor ML351 suppresses inflammasome activation by reducing lipid peroxidation and eventually decreases infarct volume and neurological deficit score (NDS). Thus, while exerting an inhibitory effect on pro-inflammatory cytokines, it also increases anti-inflammatory cytokines (a graphical abstract was created by Canan Cakir-Aktas and Hulya Karatas).
Animal procedure and experimental groups

A total of 77 male Swiss Albino mice aged 8-12 weeks, weighing 30-40 g, were used in the experiments. All experimental procedures were approved by the Hacettepe University Animal Experiments Ethics Committee (Approval No: 2016/27-04). Researchers performing the experiments were blinded to the experimental groups. The mice were randomly divided into three groups: (a) the naive group, which did not undergo ischemia; (b) the 12/15-LOX inhibitor (ML351)-treated group; and (c) the vehicle (DMSO)-treated group. Mice were anesthetized with 4% isoflurane inhalation before surgery and maintained at 1-2% isoflurane during the procedure. The body temperature of the mice was kept at 37 ± 0.2 °C with the rectal probe of a homeothermic blanket (PhysioSuite, Kent Scientific, United States). Oxygen saturation and heart rate were monitored during surgery with a pulse oximeter (V3304 Digital Table-Top Pulse Oximeter, Nonin, United States). Ischemia/recanalization was induced by proximal middle cerebral artery occlusion (pMCAo) using the intraluminal filament method. Briefly, the common and external carotid arteries were exposed through a midline neck incision and permanently ligated with sutures. A filament was introduced into the common carotid artery through an incision made in its middle part and advanced into the internal carotid artery, and the rounded end of the filament was pushed forward until it reached the middle cerebral artery (Park et al., 2014). Regional cerebral blood flow (rCBF) was monitored with a laser Doppler flowmeter (ADInstruments, New Zealand) via a flexible probe placed on the skull (2 mm posterior, 6 mm lateral to the bregma) to confirm the induction of ischemia; the occlusion was counted as successful if rCBF decreased to below 30% of the basal value. Mice were re-anesthetized 1 h after the occlusion, and the filament was removed. Either the newly synthesized 12/15-LOX inhibitor ML351 (50 mg/kg) or its solvent DMSO was injected intraperitoneally at recanalization, after 1 h of occlusion (Rai et al., 2014). Mice were sacrificed under high-dose chloral hydrate anesthesia at 6, 24, or 72 h after recanalization.

Assessment of functional neurological deficit score

The following scoring system was employed for baseline and postoperative neurological examinations (Bederson et al., 1986): 0, no visible neurological damage (normal); 1, inability to extend the right paw (mild); 2, turning to the opposite side (moderate); 3, loss of walking or righting reflex (severe).
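As a small illustration of the occlusion criterion described above, the following sketch flags an occlusion as successful when rCBF falls below 30% of the pre-occlusion baseline; the threshold follows the text, while the flow values are invented placeholders.

```python
# Check whether an MCA occlusion meets the rCBF success criterion
# (rCBF below 30% of baseline, as stated in the Methods).
# The flow values below are illustrative, not recorded data.

def occlusion_successful(baseline_flow: float, occluded_flow: float,
                         threshold: float = 0.30) -> bool:
    """True if rCBF dropped below `threshold` x baseline after occlusion."""
    return occluded_flow < threshold * baseline_flow

print(occlusion_successful(baseline_flow=250.0, occluded_flow=60.0))  # True (24%)
print(occlusion_successful(baseline_flow=250.0, occluded_flow=90.0))  # False (36%)
```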
Infarct volume measurement

Nissl staining was used to assess the ischemic brain damage. Mice were perfused transcardially with heparin and 4% paraformaldehyde (PFA) solution sequentially. The brains were then removed and fixed in 4% PFA overnight at 4 °C. Next, they were sectioned into 2-mm-thick coronal slices and embedded in paraffin (n = 3 mice/group). Each block was cut into 5 μm thick sections. After deparaffinization, the slides were first placed in 100%, 95%, and 80% ethanol solutions for 30 s each. Following this, the sections were incubated with 0.1% cresyl violet solution for 3 min at 37 °C. Immediately after incubation, they were placed in 95% ethyl alcohol solution for 2 min, and the excess stain was removed. Afterward, the sections were coverslipped with Entellan mounting medium following incubation with xylene for 2 min. Images were taken with a light microscope at 1X magnification. The infarct volume was calculated by measuring the infarcted area on the posterior surface of each of five sequential coronal slices and multiplying by the section thickness of 2 mm (Renolleau et al., 1998; Rousselet et al., 2012; McBride et al., 2015).

Analysis of lipid peroxidation (MDA)

Lipid peroxidation was measured with a TBARS assay kit (OxiSelect TBARS Assay Kit, Cell Biolabs). The principle of this method is based on the formation of a 1:2 conjugate of malondialdehyde (MDA) with thiobarbituric acid (TBA). First, the tissue samples were diluted with 5% butylated hydroxytoluene, and 100 μL of sodium dodecyl sulfate (SDS) lysis solution was added to 100 μL of each sample and incubated at room temperature for 5 min. After that, 250 μL of TBA solution was added to the samples, and the mixtures were incubated at 95 °C for 45 min. Thereafter, the mixtures were cooled on ice for 5 min and centrifuged at 10,000 × g for 15 min. Then, 200 μL of the supernatant was transferred to a new tube, and 300 μL of butanol was added. The mixture was vortexed at 3000 rpm for 3 min and centrifuged at 10,000 × g for 5 min. The absorbances of the supernatants were read at a wavelength of 532 nm in a microplate reader (SpectraMax M2, Molecular Devices, United States). A standard curve was employed to calculate the results, which were standardized by dividing by the wet tissue weight.

Tissue preparation for the biochemical analysis

Following model formation, the mice were sacrificed with high-dose anesthesia, and the fresh brain tissues were removed. After removal of the infarcted brain region, the tissues were weighed. A buffer (25 mM Tris buffer, pH 7.4; 2 mM EDTA, 5 mM MgCl2, 0.1% Triton X-100) suitable for the biochemical analyses was used for tissue homogenization. After adding buffer and protease inhibitor (1X) to the tissues, they were homogenized at 10% (w/v) with an Ultra-Turrax (S8N-5G, IKA-Werke GmbH) on ice for 3 × 10 s. All these procedures were performed on ice to prevent protein degradation. For MDA analysis, 100 μL aliquots were taken from each homogenate and stored at -80 °C. The remaining homogenates were centrifuged at 13,000 × g for 15 min at +4 °C. After centrifugation, the supernatant was separated from the pellet and used for ELISA.
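The infarct volume computation in the first subsection above is a simple sum of per-slice areas multiplied by the slice thickness; the short sketch below reproduces that arithmetic with made-up area measurements.

```python
# Infarct volume from serial coronal slices: sum(area_i) * slice thickness.
# Areas (mm^2) are illustrative placeholders, not measured values.
slice_thickness_mm = 2.0
infarct_areas_mm2 = [8.4, 12.1, 14.3, 11.0, 6.2]  # five sequential slices

infarct_volume_mm3 = sum(infarct_areas_mm2) * slice_thickness_mm
print(f"Infarct volume: {infarct_volume_mm3:.1f} mm^3")
```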
Quantitative analysis of cytokines with enzyme-linked immunosorbent assay (ELISA)

Quantitative expression of the inflammatory cytokines (IL-6, TNF-alpha, IL-1beta, IL-10, and TGF-beta) in the ischemic brain regions was detected by the sandwich ELISA method. IL-6 (BioLegend, United States), TNF-alpha (BioLegend, United States), IL-1beta (Thermo Fisher, Germany), IL-10 (BioLegend, United States), and TGF-beta (BioLegend, United States) were quantified with ELISA kits according to the manufacturers' protocols. The levels of cytokines in naive, I/R-DMSO, and I/R-ML351-treated brain samples were compared. Three mice in each group were analyzed for ELISA, and each sample was tested in duplicate. The absorbances of the samples were read at a wavelength of 532 nm on a microplate reader (SpectraMax M2, Molecular Devices, United States). The standard curve was used to calculate the results, which were normalized to the amount of total protein in each sample, determined by BCA assay (ng/mg protein).

Immunofluorescence method

For the immunohistochemical staining, mice were transcardially perfused with heparinized saline followed by 4% PFA at 6, 24, or 72 h after ischemia (n = 6 mice/group). The brain tissues were carefully removed and fixed in PFA at 4 °C overnight. Subsequently, the brains were sectioned in the coronal plane at a thickness of 12 μm, following overnight incubation in a 30% sucrose solution. The tissue sections were incubated in sodium citrate buffer (pH 6) at 80 °C for 15 min for antigen retrieval. For the blocking step, they were incubated with 10% normal serum of the secondary antibody host and 1% BSA in TBS for 1 h at RT. Next, the sections were incubated with primary antibodies targeting specific proteins, namely NLRP1 [NALP1 Antibody (M-90), Santa Cruz; 1:100], NLRP3 [anti-NLRP3/NALP3, mAb (Cryo-2), Adipogen; 1:100], 12/15-LOX (kindly provided by Klaus van Leyen), and caspase-1 (Abcam; 1:200), at 4 °C overnight. Following the primary antibody incubation, the sections were washed three times with TBS containing 0.025% Triton X-100 (Merck Millipore, 1.08603.1000). Subsequently, the sections were incubated with appropriate Cy3- or Alexa Fluor 488-conjugated anti-mouse or anti-rabbit IgG secondary antibodies. For double immunostaining of NLRP1/NeuN, NLRP3/NeuN, NLRP1/Iba1, NLRP3/S100 beta, 12/15-LOX/NLRP3, and caspase-1/NLRP3, the blocking step was repeated using 10% normal serum of the secondary antibody host and 1% BSA in TBS for 1 h at RT. Thereafter, the sections were incubated with primary antibodies targeting specific proteins, including NeuN (Millipore; 1:200), Iba-1 (Novus; 1:200), and ALDH1 (Abcam, 1:200), at 4 °C overnight. Following the washing step, the sections were incubated with appropriate Cy3- or Alexa Fluor 488-conjugated secondary antibodies (Jackson ImmunoResearch). Finally, the stained tissues were carefully mounted with a PBS/glycerol medium containing Hoechst 33258 (Invitrogen) to visualize the cellular nuclei. For each tissue section, three images were captured from the peri-infarct area using a Leica TCS SP8 confocal laser scanning microscope (Leica, Wetzlar, Germany). To quantify the immunofluorescence labeling, the numbers of NLRP1- and NLRP3-positive cells were counted by researchers who were blinded to the DMSO and LOX inhibitor treatment groups in the I/R model. The captured images were then analyzed using ImageJ software (NIH, Bethesda, MD, United States). To ensure accuracy and eliminate bias, the results were standardized by calculating the proportion relative to the counts obtained from naive brain tissue.
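Both the MDA assay and the ELISAs described above convert absorbances to concentrations via a standard curve and then normalize the result; the sketch below shows this with a simple linear fit on invented calibration points, not the calibration data actually used.

```python
# Converting absorbance readings to concentrations via a linear standard curve,
# then normalizing to total protein (as done for the ELISA results).
# All calibration and sample values are illustrative placeholders.
import numpy as np

std_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])   # pg/mL standards
std_abs = np.array([0.05, 0.18, 0.33, 0.61, 1.15])      # measured absorbances

slope, intercept = np.polyfit(std_abs, std_conc, deg=1)  # linear fit

sample_abs = np.array([0.42, 0.77])
sample_conc = slope * sample_abs + intercept             # pg/mL

total_protein_mg = np.array([0.9, 1.1])                  # BCA assay, per sample
normalized = sample_conc / total_protein_mg              # pg/mg protein
print(normalized)
```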
Statistics

Data were presented as mean ± standard error of the mean (S.E.M.). Differences among experimental groups were analyzed by Student's t-test or ANOVA followed by Tukey's post hoc test. Non-normally distributed data were compared using the Kruskal-Wallis test, with the Mann-Whitney U test used as a post hoc test for two-group comparisons. A p-value ≤0.05 was considered statistically significant. Statistical analyses were carried out using GraphPad Prism software version 6.

Results

3.1. 12/15-LOX inhibition decreases tissue damage by suppressing lipid peroxidation, infarct size, and production of pro-inflammatory cytokines

The study investigated the effects of 12/15-LOX inhibition with ML351 on tissue damage and inflammation in an ischemia/reperfusion (I/R) mouse model. The experimental design of the study is summarized in Figure 1A. To examine the tissue damage induced by ischemia/recanalization, infarct volumes were determined with Nissl staining. 12/15-LOX inhibition significantly reduced infarct volume at 6, 24, and 72 h of I/R (p ≤ 0.05; from 77.8 ± to 38.0 ± mm³, p = 0.021; from 68.4 ± to 37.0 ± mm³, p ≤ 0.050; from 74.2 ± to 46.0 ± mm³; n = 3 mice/group, respectively) (Figure 1B). The neurological deficit score results paralleled the infarct volume reduction and showed that 12/15-LOX inhibition significantly improved the neurological deficit at all time points (p = 0.042 for 6 h; p = 0.017 for 24 h; p = 0.0008 for 72 h) (Figure 1C). Next, an MDA assay was performed to determine lipid peroxidation in the acute and subacute phases of I/R. The MDA assay results showed that the increased lipid peroxidation activity was significantly attenuated in the ML351-treated groups at all time points (p = 0.0004 at 6 h; p = 0.0196 at 24 h; p = 0.0414 at 72 h) (Figure 1D). The infarct volume, neurological deficit score, and MDA data were in line with previous findings (Yigitkanli et al., 2013; Karatas et al., 2018).
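The group comparisons described in the Statistics subsection above can be sketched in Python as follows; the arrays are placeholder measurements, and scipy/statsmodels stand in for the GraphPad Prism workflow actually used in the study.

```python
# Illustrative group comparisons mirroring the Statistics subsection:
# ANOVA + Tukey post hoc for normal data, Kruskal-Wallis + Mann-Whitney otherwise.
# Data are invented placeholders, not values from this study.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

naive = np.array([1.0, 1.1, 0.9, 1.0])
dmso = np.array([2.4, 2.1, 2.6, 2.3])
ml351 = np.array([1.5, 1.4, 1.7, 1.6])

# Parametric route
f_stat, p_anova = stats.f_oneway(naive, dmso, ml351)
values = np.concatenate([naive, dmso, ml351])
groups = ["naive"] * 4 + ["DMSO"] * 4 + ["ML351"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))

# Non-parametric route
h_stat, p_kw = stats.kruskal(naive, dmso, ml351)
u_stat, p_mw = stats.mannwhitneyu(dmso, ml351)
print(f"ANOVA p={p_anova:.4f}, Kruskal-Wallis p={p_kw:.4f}, Mann-Whitney p={p_mw:.4f}")
```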
The oxidative degradation of membrane lipids under ischemic conditions leads to the production of pro-inflammatory cytokines and results in an imbalance between anti- and pro-inflammatory cytokines (Perini et al., 2001). Based on this information, we measured the pro-inflammatory (IL-6, TNF-alpha, and IL-1beta) and anti-inflammatory (IL-10 and TGF-beta) cytokines with ELISA to clarify whether there were corresponding changes between them during the different stages of ischemia. ELISA results obtained from infarct and peri-infarct areas revealed that the pro-inflammatory cytokines IL-6 and TNF-alpha increased in parallel and significantly at 6 or 24 h of I/R, and these increases were suppressed by ML351 treatment (p < 0.0001; p = 0.0004) (Figures 1E,F). IL-1beta is the most prominent pro-inflammatory cytokine in the inflammasome pathway, being activated by the inflammasome protein complexes. In this study, it was significantly increased at 6 h (p = 0.0002) and 24 h (p < 0.0001) of I/R and decreased by ML351 treatment (p = 0.03 at 6 h and p < 0.001 at 24 h) (Figure 1G). The homeostatic status of a tissue is characterized by a balance between pro- and anti-inflammatory cytokines, and it is known that this balance is disrupted after a stroke (Perini et al., 2001). Hence, we next analyzed the anti-inflammatory cytokines IL-10 and TGF-beta with ELISA. IL-10 increased at 24 h and 72 h in the ML351-treated groups, most markedly at 24 h (p = 0.0069 for 24 h, p = 0.0192 for 72 h) (Figure 1H). One of the most prominent anti-inflammatory cytokines, TGF-beta, was remarkably decreased compared to naive brains in the DMSO-administered I/R groups at all time points; however, ML351 treatment significantly increased TGF-beta levels only at 72 h of I/R (p = 0.0002) (Figure 1I).

These findings suggest that ML351 treatment plays a crucial role in maintaining the balance between pro- and anti-inflammatory cytokines, which may contribute to its protective effects in reducing tissue damage and neuroinflammation. The prominent increases in IL-10 and TGF-beta levels at 24 h and 72 h, respectively, provide suggestive evidence that ML351 may have a delayed but potent effect on promoting anti-inflammatory responses. To further explore I/R-induced neuroinflammation, we investigated the involvement of inflammasomes in ischemia-induced tissue damage.
NLRP1 inflammasome is decreased via ML351 administration in the acute and subacute phases of I/R

Inflammasomes act as sensors that detect tissue damage, including cerebral ischemia. In this study, we examined the effects of 12/15-LOX inhibition with ML351 on NLRP1 and NLRP3 at all time points following I/R. The results showed that NLRP1 inflammasome labeling was increased in the acute and subacute phases (6, 24, and 72 h) after I/R in the DMSO-treated groups compared to naive brains. However, ML351 treatment significantly decreased NLRP1 inflammasome activation at all time points (at 6 h, p = 0.002; at 24 h, p < 0.0001; and at 72 h, p = 0.0013). The staining of NLRP1 was mostly cytoplasmic (Figure 2B, white arrow). We performed double staining to identify the NLRP1-positive cell types and demonstrated that NLRP1 was mostly colocalized with the neuronal marker NeuN (Figure 2C). This suggests that neuronal NLRP1 inflammasome activation contributes to the inflammatory response following cerebral ischemia. Additionally, the decrease in NLRP1 immunoreactivity with ML351 treatment indicates that 12/15-LOX inhibition may have a protective effect against inflammasome-mediated inflammation in this model.

ML351 treatment attenuates NLRP3-positive cells after the acute and subacute phases of I/R

NLRP3 immunosignaling increased with I/R at all three time points, most prominently at 6 h and 72 h, and was significantly diminished by ML351 administration (at 6 h, p = 0.0003; at 24 h, p = 0.0027; at 72 h, p < 0.0001) (Figures 3A,B). These results demonstrate that NLRP3 plays a crucial role in the inflammatory response following I/R, and the complex activation kinetics of NLRP3 should be investigated in further studies. NLRP3 inhibition by ML351 could potentially be a therapeutic strategy for reducing inflammation. Furthermore, the significant decrease in NLRP3 signaling at 6 h and 24 h with ML351 highlights the importance of early intervention in mitigating neuronal inflammasome activation. To examine the cellular source of NLRP3, double staining was performed with NLRP3 and the neuronal cell marker NeuN, which showed that NeuN-labeled neurons were immunopositive for NLRP3 (Figure 3C). Interestingly, NLRP3 was also detected in non-neuronal cells at 72 h of I/R (Figure 3C). This change in cellular source suggests that NLRP3 may play a role in both neuronal and non-neuronal cells during the progression of I/R. Further investigation is needed to determine the specific functions of NLRP3 in these different cell types and its implications for stroke pathology.
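The cell-count quantification used here (proportion of marker-positive cells among Hoechst-positive nuclei, normalized to naive tissue, as described in the Methods) can be illustrated with the following sketch; the counts are invented, and the resulting fold changes are merely of the same kind as those reported below.

```python
# Fold-change quantification of inflammasome-positive cells:
# proportion of marker+ cells among Hoechst+ nuclei, normalized to naive tissue.
# Counts below are illustrative placeholders, not measured data.

def positive_fraction(marker_positive: int, hoechst_total: int) -> float:
    return marker_positive / hoechst_total

naive_frac = positive_fraction(marker_positive=12, hoechst_total=400)
ir_dmso_frac = positive_fraction(marker_positive=77, hoechst_total=398)

fold_change = ir_dmso_frac / naive_frac
print(f"NLRP3+ fold change vs naive: {fold_change:.2f}x")
```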
Interestingly, the study demonstrated that NLRP3 activation was more pronounced than NLRP1 activation according to the cell counting percentages of the DMSO-treated groups (Figures 2B, 3B). The ratio of NLRP3-positive cells to all cells (Hoechst-positive) was approximately 6.44 times higher in the DMSO-treated groups compared to naive brain tissue, while the ratio of NLRP1-positive cells to all cells was only approximately 1.42 times higher (p < 0.0001) (Figure 3D). This significant increase in NLRP3-positive cells indicates that NLRP3 inflammasome activation is more robust and widespread in response to ischemic injury. Hence, we proceeded to study the co-localization of the NLRP3 inflammasome with 12/15-LOX staining. NLRP3 was positive in cells expressing 12/15-LOX in both the acute and subacute phases of ischemia/recanalization (Figure 4A). With the administration of the 12/15-LOX inhibitor ML351, the expression of NLRP3 and 12/15-LOX dramatically decreased (Figure 4A). Our data showed that ML351, an inhibitor of 12/15-LOX, suppressed neuroinflammation by inhibiting the NLRP3 inflammasome at 6 h, 24 h, and 72 h following I/R. When an inflammasome complex assembles, caspase-1 is activated via the cleavage of procaspase-1. We showed that caspase-1 was increased after I/R (Figure 4B), which was compatible with the ELISA results for the pro-inflammatory cytokines IL-1beta, IL-6, and TNF-alpha. The caspase-1 increase was most prominent at 24 h and 72 h in the I/R groups and was dramatically suppressed by the administration of ML351. We then performed double immunostaining of caspase-1 and the NLRP3 inflammasome. All NLRP3-positive cells expressed caspase-1, with the greatest signal increase observed at 24 h of I/R (Figure 4C). As expected, NLRP3 and caspase-1 expression were suppressed by the administration of ML351 (Figure 4C). These results suggest that the increased expression of caspase-1 in the I/R groups was closely associated with the activation of the NLRP3 inflammasome, which was effectively inhibited by the administration of the 12/15-LOX inhibitor. Altogether, these findings highlight the potential therapeutic benefits of targeting the 12/15-LOX pathway to reduce neuroinflammation via the inflammasome pathway and improve outcomes in ischemic stroke, as summarized in the graphical abstract figure.

Figure 2. Immunofluorescence analysis of the NLRP1 inflammasome at 6 h, 24 h, or 72 h following I/R. (A) NLRP1 inflammasome protein was detected with immunofluorescence staining (n = 5 mice/group). Images of Hoechst-33258-labeled cell nuclei (blue) were overlapped with the images of NLRP1 (40X, scale bar = 25 μm). The NLRP1 inflammasome complex, whose expression was increased in the cell cytoplasm at 6, 24, or 72 h after I/R, was decreased by 12/15-LOX inhibition (ML351). In the ML351-treated group at 24 h after I/R, NLRP1 appears to be expressed in both the soma and its axonal extension, which may indicate that it is neuronal (white arrow).
In conclusion, the findings of this study suggest that ML351, a 12/15-LOX inhibitor, exerts protective effects in an I/R mouse model by reducing tissue damage, suppressing lipid peroxidation, and modulating the production of pro- and anti-inflammatory cytokines. These effects are, at least in part, mediated by the inhibition of NLRP1 and NLRP3 inflammasome activation, which may play a crucial role in the inflammatory response following cerebral ischemia. Targeting this pathway may therefore offer a promising therapeutic approach.

Discussion

Cerebral I/R causes tissue damage by triggering lipid peroxidation as well as neuroinflammation (Haeggstrom et al., 2010). Several studies indicate that the amount of free arachidonic acid, as well as that of the lipoxygenase enzymes that use arachidonic acid as a substrate, increases in ischemic brain tissue (Tong et al., 2002). Accordingly, the widespread expression of 12/15-LOX protein in both neurons and endothelial cells surrounding the infarct area has been shown in the literature (Jin et al., 2008). Furthermore, 12/15-LOX enzyme activity has been shown to increase in the ischemic mouse brain (Jin et al., 2008). A major effect of the 12/15-LOX enzyme on ischemic brain injury is its direct effect on lipid peroxidation, which leads to delayed cell death, blood-brain barrier (BBB) damage, and edema formation in the penumbra (Jin et al., 2008; Gelderblom et al., 2009). For this reason, several 12/15-LOX inhibitors have been synthesized for targeting stroke (Sailer et al., 1998; Whitman et al., 2002; Deschamps et al., 2006; Kenyon et al., 2006; Rai et al., 2010). However, since most of these inhibitors are not selective, they did not reach the desired efficacy. Recently, van Leyen et al. developed a new compound named ML351 for possible use in advanced biological models, owing to its superior properties such as solubility, microsomal stability, and the ability to cross the BBB (Rai et al., 2014). Our study showed a decrease in infarct volume, neurological deficit score, and lipid peroxidation with the administration of the more specific 12/15-LOX inhibitor ML351 in the acute and subacute phases of ischemic stroke in mice, which was compatible with previous studies using different 12/15-LOX inhibitors in mouse models of ischemic stroke (Jin et al., 2008; Yigitkanli et al., 2013; Liu X. et al., 2017; Karatas et al., 2018). This inhibitor may be regarded as an adjuvant therapeutic option alongside rtPA and other therapeutic strategies in cerebral ischemia (Karatas et al., 2018).
Here we show that 12/15-LOX inhibitor treatment effectively suppressed the activation of the inflammasome pathway, specifically the NLRP1 and NLRP3 complexes. I/R caused an increase in pro-inflammatory cytokines (IL-1beta, IL-6, TNF-alpha) at 6-h and 24-h after I/R, but not in the subacute (72-h) phase of stroke. 12/15-LOX inhibition significantly decreased IL-1beta, IL-6, and TNF-alpha protein levels at 24-h of stroke. While the early phase is characterized by an acute inflammatory response, the subacute phase is marked by a shift towards a more reparative and anti-inflammatory environment (Ran et al., 2021). Accordingly, the anti-inflammatory cytokine (TGF-beta and IL-10) results showed that TGF-beta was decreased at all time points after stroke and recovered only at 72-h with the 12/15-LOX inhibitor. IL-10 levels were prominently increased at 24-h of stroke with 12/15-LOX inhibition. These results suggest that the anti-inflammatory reaction of 12/15-LOX inhibition is mediated by an IL-10 response in the acute phase, while TGF-beta responds in the subacute phase of ischemia.
FIGURE 4 NLRP3/12/15-LOX, caspase-1, and NLRP3/caspase-1 immunolabeling at 6-h, 24-h, or 72-h following I/R. Nuclei were stained with Hoechst-33258 (blue) (40X, scale bar = 25 μm). (A) Representative images of NLRP3 (green), 12/15-LOX (red), and Hoechst-33258 (blue) were overlapped in the third panel (n = 3). Double labeling of NLRP3 and 12/15-LOX showed a strong colocalization that acted in parallel and decreased in ML351-treated brains at all time points of I/R. (B) Staining for caspase-1, a downstream effector of inflammasome activation, indicated that caspase-1 is increased mainly at 24-h and 72-h of I/R. (C) NLRP3 and caspase-1 double labeling was performed on 24-h I/R brains, as prominent caspase-1 activation was observed at this time point; this revealed that caspase-1-positive cells were also positive for NLRP3.
This highlights the potential therapeutic benefit of ML351 in reducing neuroinflammation following I/R and underscores that the timing of the targeted inflammatory reaction is of utmost importance. The 12/15-LOX enzyme causes peroxidation of membrane lipids under pathologic conditions such as ischemia. Oxidized lipids induce inflammation by activating NLRP3 (Liang et al., 2021). As a result, inhibition of the 12/15-LOX enzyme may play an important role in suppressing inflammation via the inflammasome pathway. The inflammation process that causes neuronal and glial cell death in cerebral ischemia is mediated by protein complexes called inflammasomes. Inflammasomes are cytosolic multiprotein complexes that act as sensors to detect tissue damage. After a stimulus, the ASC protein, which is found in the nucleus under physiological conditions, is translocated into the cytoplasm. There it oligomerizes, acting as a bridge between NLRP and procaspase-1, and assembles the complex (NLRP1 or NLRP3). It is known that activation of the inflammasomes causes the cleavage and release of pro-inflammatory cytokines (IL-1beta, IL-6) and alarmin-group proteins (HMGB1) in the cell. It has recently been recognized that inflammasomes may be a new therapeutic target for ischemic stroke (Karatas et al., 2018; Alishahi et al., 2019). In the literature, new compounds have been synthesized or employed to analyze inhibitory effects on the inflammasome. These include microRNAs (miR-223) (Bauernfeind et al., 2012), small molecules (MCC950) (Coll et al., 2015), nitric oxide (Mao et al., 2013), type I interferon (Guarda et al., 2011), nuclear factor erythroid-2 related factor 2 (Nrf2) (Liu X. et al., 2017), and polyphenolic compounds like curcumin (Ma et al., 2014; Ishrat et al., 2015; Chen P.Y. et al., 2023).
However, the role of inflammasomes in the inflammatory response during stroke recovery has not been clarified. One study reported increased expression of NLRP1 immediately following stroke, which was maintained at 12, 24, and 72-h (Fann et al., 2013). In our study, NLRP1 immune signaling started to increase at 6-h of ischemia, and this increase continued at 24-h and 72-h after I/R. The greatest induction of NLRP1 was observed at 24-h of ischemia. This result correlates with our ELISA results for pro-inflammatory cytokine levels, which were mostly responsive at 24-h following I/R. Interestingly, NLRP3 immunoreactivity increased more prominently than NLRP1 in our stroke model. The staining of both inflammasomes increased with I/R and was significantly suppressed by 12/15-LOX inhibition. It is known that inflammasome activation includes procaspase-1 cleavage; accordingly, we showed that NLRP3-positive cells express caspase-1, supporting formation of the inflammasome complex. In accordance with our results, it was reported that NLRP3 inflammasome-induced inflammation was reduced by lipoxygenase inhibition with cepharanthine at 24-h of cerebral I/R (Zhao et al., 2020). On the other hand, some recent findings are incompatible with the earlier literature: ischemic brain injury was not reduced by specific inhibition of NLRP3 with MCC950 or by NLRP3−/− knockout in mice (Lemarchand et al., 2019). These conflicting results clearly show that stroke pathophysiology has a complex nature and that targeting only one pathway may not be sufficient to decrease ischemic damage. In addition to its anti-inflammatory properties, 12/15-LOX has been implicated in other pathophysiological processes in stroke, such as oxidative stress and neuronal apoptosis (Liu Y. et al., 2017; Karatas and Cakir-Aktas, 2019; Kursun et al., 2022). Our study therefore highlights the importance of considering multiple pathways and their interactions in the development of stroke treatments. This research also provides valuable insights into the complex nature of inflammatory responses and offers potential avenues for future research and treatment options. Furthermore, in our study, the cellular source of the inflammasome complex showed that NLRP1 and NLRP3 were expressed in neurons. This finding corroborates recent literature suggesting the importance of neuronal inflammasome activation in stroke pathophysiology, including the study by Gong et al.,
which indicated that the NLRP3 inflammasome was expressed initially by microglia, followed by microvascular endothelial cells and neurons, but principally by neurons 24-h after cerebral I/R (Gong et al., 2018). Others have also shown that NLRP3 expression increases specifically in neurons at 4-h, 8-h, and 24-h following transient MCAo (Franke et al., 2021). In our study, we found that NLRP3 expression shifted from neurons to non-neuronal cells at 72-h of I/R. Therefore, the cellular source of NLRP3 may vary significantly across distinct pathologies, such as migraine and permanent or transient cerebral ischemia, as well as over the course of stroke-induced brain damage. Interestingly, we showed that increased NLRP3 signaling was present in 12/15-LOX-positive cells and that both signals were suppressed by ML351, indicating a direct interaction between the inflammasome and lipoxygenase pathways. It is known that 12/15-LOX activity leads to neuronal death. Recently, an increasing number of studies have reported that the activation of inflammation-related signaling pathways is closely connected with ferroptosis (Chen Y. et al., 2023). Ferroptosis is an iron-dependent regulated cell death driven by excessive lipid peroxidation. Studies have indicated that increased lipid peroxidation can activate multiple inflammatory pathways, including inflammasomes, while pro-inflammatory cytokines, in turn, aggravate intracellular oxidative stress and excessive lipid peroxidation. 12/15-LOX initiates the peroxidation of phospholipids not only in the plasma membrane but also within the mitochondrial membrane, leading to the formation of membrane defects. This facilitates the translocation of mitochondrial DNA (mtDNA) from the matrix to the cytoplasm, where the extramitochondrial milieu exposes mtDNA to excessive oxidative stress. Oxidative modification of mtDNA enhances its interaction with NLRP3, thereby stimulating assembly of the NLRP3 inflammasome protein complex, which may eventually lead to neuronal cell death (van Leyen et al., 2006; Qiu et al., 2022). Further studies focusing on the crosstalk between lipid peroxidation, inflammasome activation, ferroptosis, and other cell death pathways may shed light on the pathophysiological mechanism of each process and might provide novel therapeutic targets for relevant diseases. Overall, our findings suggest that targeting NLRP1 or NLRP3 inflammasome activation by 12/15-LOX inhibition may be a promising approach for reducing inflammation and improving outcomes in I/R injury. Additionally, this study emphasizes the significance of understanding the cellular sources and expression patterns of key inflammatory molecules at different stages of stroke progression. By identifying these factors, we may better understand the underlying mechanisms driving stroke pathophysiology. Contradictory studies may reflect the complex and dynamic nature of the inflammatory response in stroke, which involves multiple cell types and signaling pathways. Further research is needed to elucidate the precise mechanisms underlying the effects of 12/15-LOX inhibition on post-stroke inflammation and to explore its potential as a therapeutic target for stroke. A comprehensive understanding of its role in stroke pathogenesis is therefore crucial for developing effective interventions.
Limitations of our study include the low number of mice used in some experiments. To mitigate this, we sought to increase the accuracy of the results by obtaining at least 3 images from consecutive peri-infarct areas and repeating the ELISA studies 2-3 times; we nevertheless obtained statistically significant results. Another limitation is that only brain tissue was studied in the ELISA analysis; serum samples would have been valuable for comparing brain tissue results with circulating cytokines. However, serum levels of inflammatory cytokines may not accurately represent brain inflammation, as other systemic confounding factors may be present.
Our results provide new insights into the role of 12/15-LOX in inflammasome signaling and neuronal damage. The activation of the NLRP3 inflammasome in 12/15-LOX-positive cells may contribute to the inflammatory response and neural damage observed in stroke. Targeting this pathway may represent a novel therapeutic strategy for stroke prevention and treatment. In addition, our findings highlight the need for further investigation into the mechanisms underlying inflammasome activation in neurons and their contribution to neurological disorders. Overall, our study sheds light on the complex interplay between inflammation, oxidative stress, and neuronal damage in stroke pathophysiology and opens new avenues for future research. Further research is needed to fully understand the role of 12/15-LOX in stroke-induced neuroinflammation, and the development of specific inhibitors targeting 12/15-LOX may provide a potential therapeutic strategy for stroke patients. It is also important to note that neuroinflammation, oxidative stress, and lipid peroxidation are involved not only in stroke but also in other neurological disorders such as Alzheimer's disease and Parkinson's disease. Thus, understanding the mechanisms underlying lipid peroxidation and its associated pathways may have broader implications for the treatment of various neurological diseases. Overall, continued research on lipid peroxidation and its effects on neuroinflammation will contribute to a better understanding of the pathophysiology of stroke and other neurological disorders, ultimately leading to improved treatments and outcomes for patients.
Conclusion
In summary, these results indicate that one possible mechanism of 12/15-LOX inhibition is the suppression of neuroinflammation in both the acute and subacute phases of cerebral ischemia/recanalization through inhibition of inflammasomes and inflammasome-related proteins. These findings also suggest that 12/15-LOX inhibitors may be a treatment option for stroke therapy or for other diseases characterized by neuroinflammation.
FIGURE 3 Immunofluorescence analysis of NLRP3 inflammasome at 6-h, 24-h, or 72-h following I/R. (A) Immunofluorescence staining of NLRP3. Nuclei were stained with Hoechst. Representative images of NLRP3 (red) and Hoechst-33258 (blue) were overlapped (40X, scale bar = 25 μm). Cytoplasmic NLRP3 labeling was increased in the DMSO groups in the acute and subacute phases of I/R. With the administration of ML351, labeling and signal intensity of NLRP3 were dramatically decreased at all time points following I/R. (B) Graphical representation of the changes in cell numbers with positive NLRP3 immunoreactivity at 6, 24, and 72-h. NLRP3-positive cell numbers were increased in both the acute (6 and 24-h) and subacute (72-h) phases of I/R. The decreases in NLRP3 immunoreactivity at 6-h (***p = 0.0003), 24-h (**p = 0.0027), and 72-h (****p < 0.0001) with ML351 treatment were statistically significant (n = 3 mice/group). (C) Double immunolabeling of NLRP3 (red) and neurons (NeuN; neuronal marker) (green) showed that NLRP3 colocalized with the NeuN signal (white arrows), especially at 6-h and 24-h after I/R. At 72-h of I/R, NLRP3 immunoreactivity was present in both neurons (white arrows) and non-neuronal cells (black arrows). (D) Comparison of NLRP3 and NLRP1 staining in I/R-DMSO groups normalized to naive brain tissue. NLRP3 immunostaining was significantly increased compared to NLRP1 immunoreactivity (****p < 0.0001).
Clinical Outcomes Among Working Adults Using the Health Integrator Smartphone App: Analyses of Prespecified Secondary Outcomes in a Randomized Controlled Trial
Background: There is a need to find new methods that can enhance the individuals' engagement in self-care and increase compliance to a healthy lifestyle for the prevention of noncommunicable diseases and improved quality of life. Mobile health (mHealth) apps could provide large-scale, cost-efficient digital solutions to implement lifestyle change, which as a corollary may enhance quality of life.
Objective: Here we evaluate if the use of a smartphone-based self-management system, the Health Integrator app, with or without telephone counseling by a health coach, had an effect on clinical variables (secondary outcomes) of importance for noncommunicable diseases.
Methods: The study was a 3-armed parallel randomized controlled trial. Participants were randomized to a control group or to 1 of 2 intervention groups using the Health Integrator app with or without additional telephone counseling for 3 months. Clinical variables were assessed before the start of the intervention (baseline) and after 3 months. Due to the nature of the intervention, targeting lifestyle changes, participants were not blinded to their allocation. Robust linear regression with complete case analysis was performed to study the intervention effect among the intervention groups, both in the entire sample and stratifying by type of work (office worker vs bus driver) and sex.
Results: Complete data at baseline and follow-up were obtained from 205 and 191 participants, respectively. The mean age of participants was 48.3 (SD 10) years; 61.5% (126/205) were men and 52.2% (107/205) were bus drivers. Improvements were observed at follow-up among participants in the intervention arms. There was a small statistically significant effect on waist circumference (β=–0.97, 95% CI –1.84 to –0.10) in the group receiving the app and additional coach support compared to the control group, but no other statistically significant differences were seen. However, participants receiving only the app had statistically significantly lower BMI (β=–0.35, 95% CI –0.61 to –0.09), body weight (β=–1.08, 95% CI –1.92 to –0.26), waist circumference (β=–1.35, 95% CI –2.24 to –0.45), and body fat percentage (β=–0.83, 95% CI –1.65 to –0.02) at follow-up compared to the controls. There was a statistically significant difference in systolic blood pressure between the two intervention groups at follow-up (β=–3.74, 95% CI –7.32 to –0.16); no other statistically significant differences in outcome variables were seen.
Conclusions: Participants randomized to use the Health Integrator smartphone app showed small but statistically significant differences in body weight, BMI, waist circumference, and body fat percentage compared to controls after a 3-month intervention. The effect of additional coaching together with use of the app is unclear.
Trial Registration: ClinicalTrials.gov NCT03579342; https://clinicaltrials.gov/ct2/show/NCT03579342
International Registered Report Identifier (IRRID): RR2-10.1186/s12889-019-6595-6
Percentage of Users (starters) still using the app as recommended after 3 months
Overall, was the app/intervention effective?
Is this a full-powered effectiveness trial or a pilot/feasibility trial?
Manuscript tracking number
1a) Does your paper address CONSORT item 1a?
1a-i) Identify the mode of delivery. Preferably use "web-based" and/or "mobile" and/or "electronic game" in the title. Avoid ambiguous terms like "online", "virtual", "interactive". Use "Internet-based" only if the intervention includes non-web-based Internet components (e.g., email), use "computer-based" or "electronic" only if offline products are used. Use "virtual" only in the context of "virtual reality" (3-D worlds). Use "online" only in the context of "online support groups". Complement or substitute product names with broader terms for the class of products (such as "mobile" or "smart phone" instead of "iphone"), especially if the application runs on different platforms.
Does your paper address subitem 1a-i?
"the Health Integrator smartphone application"
1a-ii) Non-web-based components or important co-interventions in title: mention non-web-based components or important co-interventions in the title, if any (e.g., "with telephone support").
Does your paper address subitem 1a-ii?
1a-iii) Primary condition or target group in the title: mention the primary condition or target group in the title, if any (e.g., "for children with Type I Diabetes"). Example: A Web-based and Mobile Intervention with Telephone Support for Children with Type I Diabetes: Randomized Controlled Trial
Does your paper address subitem 1a-iii?
"working adults"
1b-i) Key features/functionalities/components of the intervention and comparator in the METHODS section of the ABSTRACT: mention key features/functionalities/components of the intervention and comparator in the abstract. If possible, also mention theories and principles used for designing the site. Keep in mind the needs of systematic reviewers and indexers by including important synonyms.
Does your paper address subitem 1b-i?
"three-armed parallel randomized controlled trial. Participants were randomized to a control group, or to one of two intervention groups using the Health Integrator application with or without additional telephone counselling during three months."
1b-ii) Level of human involvement in the METHODS section of the ABSTRACT: clarify the level of human involvement in the abstract, e.g., use phrases like "fully automated" vs. "therapist/nurse/care provider/physician-assisted" (mention the number and expertise of providers involved, if any).
Does your paper address subitem 1b-ii?
"with or without telephone counselling by a health coach"
Does your paper address subitem 1b-iii?
1b-iv) RESULTS section in abstract must contain use data: report the number of participants enrolled/assessed in each group and the use/uptake of the intervention (e.g., attrition/adherence metrics, use over time, number of logins), in addition to primary/secondary outcomes.
Does your paper address subitem 1b-iv?
"Complete data at baseline and follow-up was obtained from 205 and 168 participants, respectively."
1b-v) CONCLUSIONS/DISCUSSION in abstract for negative trials: discuss the primary outcome; if the trial is negative (primary outcome not changed) and the intervention was not used, discuss whether the negative results are attributable to lack of uptake, and discuss reasons.
Does your paper address subitem 1b-v?
Not a negative trial.
INTRODUCTION
2a) In INTRODUCTION: Scientific background and explanation of rationale
"While health coaches can help to set realistic goals and encourage when motivation fails, they are also less scalable. Results differ, from favoring the smartphone and coach assisted group, [7,9] to showing no significant difference between groups. [8] Based on evidence from previous research on smartphone assisted lifestyle interventions, we developed and built a new digital platform for lifestyle change. The platform, called Health Integrator, offers a variety of public, private and community services for behavior change in different domains such as smoking, alcohol, physical activity, diet, stress, and sleep."
Does your paper address CONSORT subitem 2b?
"The aim of this study was to evaluate if use of the smartphone based self-management system, the Health Integrator, for three months, with or without telephone counselling by a health coach, had an effect on clinical variables of importance for non-communicable diseases such as body mass index, waist circumference and blood pressure, in office workers and bus drivers, respectively."
3a) Description of trial design (such as parallel, factorial) including allocation ratio
3b) Important changes to methods after trial commencement (such as eligibility criteria), with reasons
Does your paper address CONSORT subitem 3a?
"three-armed parallel randomized controlled trial with allocation 1:1:1, to one of two intervention arms or a control group"
Does your paper address CONSORT subitem 3b?
No changes
3b-i) Bug fixes, downtimes, content changes: ehealth systems are often dynamic systems. A description of changes to methods therefore also includes important changes made to the intervention or comparator during the trial (e.g., major bug fixes or changes in functionality or content) (5-iii) and other "unexpected events" that may have influenced the study design, such as staff changes and system failures/downtimes.
Does your paper address subitem 3b-i?
Not applicable
Does your paper address CONSORT subitem 4a?
"Both men and women were eligible for participation and inclusion criteria were: 18 years of age or older, being able to understand Swedish well enough to understand the study aims and provide informed consent for participation, and having access and ability to use a smartphone."
4a-i) Computer/Internet literacy is often an implicit "de facto" eligibility criterion; this should be explicitly clarified.
Does your paper address subitem 4a-i?
"having access and ability to use a smartphone"
4a-ii) Open vs. closed, web-based vs. face-to-face assessments: mention how participants were recruited (online vs. offline), e.g., from an open-access website or from a clinic, and clarify whether this was a purely web-based trial or whether there were face-to-face components (as part of the intervention or for assessment), i.e., to what degree the study team got to know the participant. In online-only trials, clarify whether participants were quasi-anonymous and whether having multiple identities was possible, or whether technical or logistical measures (e.g., cookies, email confirmation, phone calls) were used to detect/prevent these.
Does your paper address subitem 4a-ii?
"Study participants were recruited from four companies, including two companies with white-collar employees, i.e. office workers, and two companies with blue-collar employees, i.e. bus drivers." "Employees that fulfilled inclusion criteria and were interested in participating were emailed detailed information about the study and a link to the web-based baseline questionnaire."
4a-iii) Information given during recruitment: specify how participants were briefed for recruitment and in the informed consent procedures (e.g., publish the informed consent documentation as an appendix, see also item X26), as this information may have an effect on user self-selection and user expectation and may also bias results.
4b) Settings and locations where the data were collected
Does your paper address subitem 4a-iii?
"Eligible participants were required to give their informed consent prior to responding to the baseline questionnaire. After an introductory screen displaying information about the study, participants were required to consent to participate in order to continue to the questionnaire. At the baseline meeting, participants also gave their written informed consent."
Does your paper address CONSORT subitem 4b?
"Study participants were recruited from four companies, including two companies with white-collar employees, i.e. office workers, and two companies with blue-collar employees, i.e. bus drivers."
4b-i) Report if outcomes were (self-)assessed through online questionnaires: clearly report whether outcomes were (self-)assessed through online questionnaires (as is common in web-based trials) or otherwise.
Does your paper address subitem 4b-i?
"Weight (kg), waist circumference (cm), body fat percent, and blood pressure was measured by study personnel at baseline and follow-up after 3-months. Height (cm) was self-reported at baseline. BMI (kg/m2) was calculated based on measured weight and reported height. Body weight, waist circumference, and body fat percent was analyzed separately for women and men."
4b-ii) Report how institutional affiliations are displayed to potential participants, as affiliations with prestigious hospitals or universities may affect volunteer rates, use, and reactions with regard to an intervention. (Not a required item; describe only if this may bias results.)
Does your paper address subitem 4b-ii?
Not relevant
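The BMI derivation quoted under item 4b-i is simple enough to state exactly. The minimal sketch below illustrates it with hypothetical column names and values; it is not the study's actual database schema.

import pandas as pd

# Illustrative records: weight measured by study personnel (kg),
# height self-reported at baseline (cm)
df = pd.DataFrame({
    "weight_kg": [82.4, 69.1, 95.0],
    "height_cm": [178, 165, 183],
})

# BMI (kg/m2) from measured weight and reported height
df["bmi"] = df["weight_kg"] / (df["height_cm"] / 100) ** 2
print(df.round(1))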
5) The interventions for each group with sufficient details to allow replication, including how and when they were actually administered
Does your paper address subitem 5-i?
5-ii) Describe the history/development process of the application and previous formative evaluations (e.g., focus groups, usability testing), as these will have an impact on adoption/use rates and help with interpreting results.
Does your paper address subitem 5-ii?
Not relevant for this manuscript
5-iii) Revisions and updating. Clearly mention the date and/or version number of the application/intervention (and comparator, if applicable) evaluated, or describe whether the intervention underwent major changes during the evaluation process, or whether the development and/or content was "frozen" during the trial. Describe dynamic components such as news feeds or changing content which may have an impact on the replicability of the intervention (for unexpected events see item 3b).
Does your paper address subitem 5-iii?
Not relevant for this manuscript
5-iv) Quality assurance methods. Provide information on quality assurance methods to ensure accuracy and quality of information provided [1], if applicable.
Does your paper address subitem 5-iv?
Not relevant for this manuscript
5-v) Ensure replicability by publishing the source code, and/or providing screenshots/screen-capture video, and/or providing flowcharts of the algorithms used. Replicability (i.e., other researchers should in principle be able to replicate the study) is a hallmark of scientific reporting.
Does your paper address subitem 5-v?
Not relevant for this manuscript
5-vi) Digital preservation: Provide the URL of the application, but as the intervention is likely to change or disappear over the course of the years, also make sure the intervention is archived (Internet Archive, webcitation.org, and/or publishing the source code or screenshots/videos alongside the article). As pages behind login screens cannot be archived, consider creating demo pages which are accessible without login.
Does your paper address subitem 5-vi?
Not relevant for this manuscript
5-vii) Access: Describe how participants accessed the application, in what setting/context, if they had to pay (or were paid) or not, and whether they had to be a member of a specific group. If known, describe how participants obtained "access to the platform and Internet" [1]. To ensure access for editors/reviewers/readers, consider providing a "backdoor" login account or demo mode for reviewers/readers to explore the application (also important for archiving purposes, see 5-vi).
Does your paper address subitem 5-vii?
"During the baseline meeting, participants that had been randomized to one of the two intervention groups downloaded the Health Integrator smartphone application."
Does your paper address subitem 5-viii?
"During the active intervention, participants recorded if the weekly goal was met using an emoticon scale, i.e. smiley faces, or by marking the number of days that the goal was met during the week. A reminder to record progression was sent out every Sunday at 9.20 pm. Within the Health Integrator system, a number of different offers related to the different intervention areas were also available. For example, a participant aiming to increase physical activity levels could choose between offers including for example other smartphone applications developed to promote physical activity specifically, e.g. Runkeeper, or receive a training pass at a local gym. The offers were free of charge for the participant." Additional details are found in the published study protocol.
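As a rough illustration of the weekly reminder described under item 5-viii (sent every Sunday at 9.20 pm), the sketch below uses the third-party Python package schedule; send_reminder is a hypothetical stand-in for the app's actual push-notification service, whose implementation is not described in the manuscript.

import time
import schedule

def send_reminder():
    # Stand-in for a push notification to the participant's phone
    print("Reminder: record whether this week's goal was met.")

# Fire every Sunday at 21:20 (9.20 pm), as in the study description
schedule.every().sunday.at("21:20").do(send_reminder)

while True:
    schedule.run_pending()
    time.sleep(60)  # poll once per minute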
5-ix) Describe use parameters (e.g., intended "doses" and optimal timing for use). Clarify what instructions or recommendations were given to the user, e.g., regarding timing, frequency, heaviness of use, if any, or whether the intervention was used ad libitum.
Does your paper address subitem 5-ix?
Not relevant for this manuscript
5-x) Clarify the level of human involvement (care providers or health professionals, also technical assistance) in the e-intervention or as co-intervention (detail the number and expertise of professionals involved, if any, as well as the "type of assistance offered, the timing and frequency of the support, how it is initiated, and the medium by which the assistance is delivered"). It may be necessary to distinguish between the level of human involvement required for the trial and the level required for a routine application outside of an RCT setting (discuss under item 21 - generalizability).
Does your paper address subitem 5-x?
"Employees that fulfilled inclusion criteria and were interested in participating were emailed detailed information about the study and a link to the web-based baseline questionnaire." "After having completed the baseline questionnaire, the respondent was provided with a link to the Health Integrator system. He or she was asked to answer additional questions creating a health profile in the system and could thereafter schedule a time for the baseline meeting with the health coach." "Study participants meet with study personnel at baseline and after 3-months of follow-up." "Results from the health profile were discussed with the health coach at the baseline meeting."
5-xi) Report any prompts/reminders used: clarify if there were prompts (letters, emails, phone calls, SMS) to use the application, what triggered them, their frequency, etc. It may be necessary to distinguish between the level of prompts/reminders required for the trial and the level required for a routine application outside of an RCT setting (discuss under item 21 - generalizability).
Does your paper address subitem 5-xi?
"During the active intervention, participants recorded if the weekly goal was met using an emoticon scale, i.e. smiley faces, or by marking the number of days that the goal was met during the week. A reminder to record progression was sent out every Sunday at 9.20 pm."
Does your paper address subitem 5-xii?
"Participants were randomized to one of three groups: Intervention group A receiving the Health Integrator Smartphone application and additional coach support, Intervention group B receiving the Health Integrator Smartphone application without additional coach support, or Control group C that did not receive the Health Integrator smartphone application or any coach support."
Does your paper address CONSORT subitem 6a?
"Weight (kg), waist circumference (cm), body fat percent, and blood pressure was measured by study personnel at baseline and follow-up after 3-months. Height (cm) was self-reported at baseline. BMI (kg/m2) was calculated based on measured weight and reported height. Body weight, waist circumference, and body fat percent was analyzed separately for women and men."
6b) Any changes to trial outcomes after the trial commenced, with reasons
7a) How sample size was determined. NPT: When applicable, details of whether and how the clustering by care providers or centers was addressed
6a-iii) Describe whether, how, and when qualitative feedback from participants was obtained (e.g., through emails, feedback forms, interviews, focus groups).
Does your paper address subitem 6a-iii?
Not applicable for this manuscript. Evaluated in a separate publication published in JMIR Mhealth Uhealth.
Does your paper address CONSORT subitem 6b?
No changes in trial outcomes were made after the trial commenced.
7b) When applicable, explanation of any interim analyses and stopping guidelines
7a-i) Describe whether and how expected attrition was taken into account when calculating the sample size.
Does your paper address subitem 7a-i?
Details published in study protocol.
Does your paper address CONSORT subitem 7b?
Not applicable
8a) Method used to generate the random allocation sequence. NPT: When applicable, how care providers were allocated to each trial group
8b) Type of randomisation; details of any restriction (such as blocking and block size)
9) Mechanism used to implement the random allocation sequence (such as sequentially numbered containers), describing any steps taken to conceal the sequence until interventions were assigned
10) Who generated the random allocation sequence, who enrolled participants, and who assigned participants to interventions
Does your paper address CONSORT subitem 8a?
Details published in study protocol.
Does your paper address CONSORT subitem 8b?
"three-armed parallel randomized controlled trial with allocation 1:1:1, to one of two intervention arms or a control group" "Randomization was done in blocks of six by type of company and gender, using a random allocation list." Further details published in study protocol.
Does your paper address CONSORT subitem 9?
Details published in study protocol.
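As a concrete reading of the allocation scheme quoted under item 8b (permuted blocks of six, allocation 1:1:1, stratified by type of company and gender), here is a minimal sketch; the stratum labels, number of blocks, and seeding are illustrative assumptions rather than the authors' documented procedure.

import random

def allocation_list(n_blocks, seed=None):
    # Permuted blocks of six: two each of groups A, B, C per block (1:1:1)
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["A", "B", "C"] * 2
        rng.shuffle(block)
        sequence.extend(block)
    return sequence

# One independent list per stratum (type of company x gender)
strata = ["office/female", "office/male", "bus/female", "bus/male"]
lists = {s: allocation_list(n_blocks=10, seed=i) for i, s in enumerate(strata)}
print(lists["office/male"][:6])  # first block for one stratum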
Does your paper address CONSORT subitem 10?
"Participants were randomized by the health coaches to one of three groups:..." "Participants were informed about their allocation when meeting with study personnel at baseline measurements."
11a) If done, who was blinded after assignment to interventions (for example, participants, care providers, those assessing outcomes) and how. NPT: Whether or not those administering co-interventions were blinded to group assignment
11a-i) Specify who was blinded, and who wasn't. Usually, in web-based trials it is not possible to blind the participants [1,3] (this should be clearly acknowledged), but it may be possible to blind outcome assessors, those doing data analysis, or those administering co-interventions (if any).
Does your paper address subitem 11a-i?
Not applicable in this study setting.
11a-ii) Informed consent procedures (4a-ii) can create biases and certain expectations; discuss, e.g., whether participants knew which intervention was the "intervention of interest" and which one was the "comparator".
Does your paper address subitem 11a-ii?
"Participants in the control group received access to the application after the active intervention had been completed at 3-months of follow-up."
Does your paper address CONSORT subitem 11b?
Not relevant
12a) Statistical methods used to compare groups for primary and secondary outcomes. NPT: When applicable, details of whether and how the clustering by care providers or centers was addressed
Does your paper address CONSORT subitem 12a?
"The effect of the intervention between intervention groups and control group (intervention group A vs. control group C; intervention group B vs. control group C; and intervention group A vs. intervention group B) was analyzed using robust regression in order to account for influential observations (outliers) in data. Further, analysis of within-group comparisons and intervention effects were also performed stratified by type of work (office worker vs. bus driver). Missing data was minimal with 99% of participants having complete data on body weight and BMI, and 93% on blood pressure, waist circumference and body fat percent. Therefore, all statistical analyses are based on complete cases. The statistical significance was set at p<0.05."
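To make the quoted analysis under item 12a concrete, the sketch below runs a complete-case robust regression contrasting each intervention arm against control group C, using statsmodels' RLM (a Huber M-estimator by default, which downweights influential outliers). The simulated data, column names, and baseline adjustment are illustrative assumptions; the authors' exact model specification may differ.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "group": rng.choice(["A", "B", "C"], size=n),
    "weight_baseline": rng.normal(80, 12, size=n),
})
# Simulate a small benefit in both intervention arms
df["weight_followup"] = (df["weight_baseline"]
                         - (df["group"] != "C")
                         + rng.normal(0, 2, size=n))

# Complete-case analysis (missingness was minimal in the study)
df = df.dropna()

# Robust linear regression with control group C as the reference level
model = smf.rlm("weight_followup ~ C(group, Treatment('C')) + weight_baseline",
                data=df).fit()
print(model.summary())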
12a-i) Imputation techniques to deal with attrition / missing values: Not all participants will use the intervention/comparator as intended, and attrition is typically high in ehealth trials. Specify how participants who did not use the application or dropped out from the trial were treated in the statistical analysis (a complete case analysis is strongly discouraged, and simple imputation techniques such as LOCF may also be problematic [4]).
Does your paper address subitem 12a-i?
"Missing data was minimal with 99% of participants having complete data on body weight and BMI, and 93% on blood pressure, waist circumference and body fat percent. Therefore, all statistical analyses are based on complete cases."
Does your paper address CONSORT subitem 12b?
"Results are shown for all study participants and by study group (intervention group A, B, and control group C)."
Does your paper address subitem X26-i?
"The study was approved by the Regional Ethical Review Board in Stockholm, ..."
Does your paper address subitem X26-ii?
"Eligible participants were required to give their informed consent prior to responding to the baseline questionnaire. After an introductory screen displaying information about the study, participants were required to consent to participate in order to continue to the questionnaire. At the baseline meeting, participants also gave their written informed consent."
X26-iii) Safety and security procedures, incl. privacy considerations, and any steps taken to reduce the likelihood or detection of harm (e.g., education and training, availability of a hotline)
Does your paper address subitem X26-iii?
Not applicable in this manuscript
RESULTS
13a) For each group, the numbers of participants who were randomly assigned, received intended treatment, and were analysed for the primary outcome. NPT: The number of care providers or centers performing the intervention in each group and the number of patients treated by each care provider in each center
13b) For each group, losses and exclusions after randomisation, together with reasons
Does your paper address CONSORT subitem 13a?
See table 1.
Does your paper address CONSORT subitem 13b?
13b) For each group, losses and exclusions after randomisation, together with reasons. (NOTE: Preferably, this is shown in a CONSORT flow diagram.)
Response: "Among these, four did not complete baseline measurements and were excluded from all analysis, and 168 had complete data from the 3-month follow-up."

13b-i) Attrition diagram. Strongly recommended: an attrition diagram (e.g., proportion of participants still logging in or using the intervention/comparator in each group plotted over time, similar to a survival curve) or other figures or tables demonstrating usage/dose/engagement.
Response: Data will be published in a separate manuscript.

14a) Response: "The design of the Health Integrator study has been described in detail previously." Details are published in the study protocol.

14a-i) Indicate if critical "secular events" fell into the study period, e.g., significant changes in Internet resources available or "changes in computer hardware or Internet delivery resources".
Response: Not applicable.

14b) Why the trial ended or was stopped (early).
Response: Not applicable, the trial was not stopped early.
15) A table showing baseline demographic and clinical characteristics for each group. NPT: When applicable, a description of care providers (case volume, qualification, expertise, etc.) and centers (volume) in each group.
Response: "Baseline characteristics of all subjects divided into the different intervention groups are shown in Table 1."

15-i) Report demographics associated with digital divide issues: in ehealth trials it is particularly important to report demographics associated with digital divide issues, such as age, education, gender, social-economic status, and computer/Internet/ehealth literacy of the participants, if known.
Response: "In Sweden, this number is 92%, furthermore, ownership and usage is independent of socioeconomic status."

16-i) Report multiple "denominators" and provide definitions: report N's (and effect sizes) "across a range of study participation [and use] thresholds" [1], e.g., N exposed, N consented, N used more than x times, N used more than y weeks, N participants "used" the intervention/comparator at specific pre-defined time points of interest (in absolute and relative numbers per group). Always clearly define "use" of the intervention.
Response: See Tables 1, 2 and 3.

16-ii) Primary analysis should be intent-to-treat; secondary analyses could include comparing only "users", with the appropriate caveat that this is no longer a randomized sample (see 18-i).
Response: "Missing data was minimal with 99% of participants having complete data on body weight and BMI, and 93% on blood pressure, waist circumference and body fat percent. Therefore, all statistical analyses are based on complete cases."

17a) Response: See

17a-i) Presentation of process outcomes such as metrics of use and intensity of use: in addition to primary/secondary (clinical) outcomes, the presentation of process outcomes such as metrics of use and intensity of use (dose, exposure) and their operational definitions is critical.
This does not only refer to metrics of attrition (13-b) (often a binary variable), but also to more continuous exposure metrics such as "average session length". These must be accompanied by a technical description of how a metric like a "session" is defined (e.g., timeout after idle time) [1] (report under item 6a).
Response: Not relevant for this manuscript.

17b) Response: See

18) Response: See Appendix 1 and 2 for stratified analyses.

18-i) Subgroup analysis of comparing only users: a subgroup analysis comparing only users is not uncommon in ehealth trials, but if done, it must be stressed that this is a self-selected sample and no longer an unbiased sample from a randomized trial (see 16-iii).
Response: Not applicable to this manuscript.

19) Response: Not applicable for this intervention study.

19-ii) Response: "User satisfaction with the application was assessed at the end of the trial."

22) Interpretation consistent with results, balancing benefits and harms, and considering other relevant evidence. NPT: In addition, take into account the choice of the comparator, lack of or partial blinding, and unequal expertise of care providers or centers in each group.

22-i) Restate study questions and summarize the answers suggested by the data, starting with primary outcomes and process outcomes (use).
Response: "We found improvements in BMI and waist circumference after 3-months of intervention in the groups receiving the Health Integrator smartphone app with or without additional coach support. In the group receiving additional coach support, systolic blood pressure was also improved at follow-up. However, this was only seen in within-group analysis and not in between group comparisons. Whilst the lowering of blood pressure was more pronounced among office workers, the effect on variables relating to body weight seems to be mainly driven by changes in outcomes among bus drivers. Nevertheless, the effect on body weight was present in office workers as well."

22-ii) Response: "...our application targeting the individual's specific need of lifestyle change show promising results with regard to positive changes in body weight, BMI and waist circumference seen in a 3-month intervention. Nonetheless, studies with longer follow-up are needed, to further assess the long-term effect on variables relating to body weight."

20-i) Typical limitations in ehealth trials: participants in ehealth trials are rarely blinded. Ehealth trials often look at a multiplicity of outcomes, increasing the risk for a Type I error. Discuss biases due to non-use of the intervention/usability issues, biases through informed consent procedures, and unexpected events.
Response: "controls in our study only received standard care. This may possibly have led to our control subjects feeling disappointed and thereby less motivated to a healthy lifestyle"; "A common limitation in intervention studies is drop-out and attrition of study participants. In a previous Swedish study evaluating an internet based weight loss program, only 19% (4,440 out of 22,860 from start) logged in at least twice during the first three months and at least twice during the last two months. Nevertheless, compliance in our study was high..."; "However, since employment was a prerequisite for participation in our study, this may have created a selection of healthier people than the general population. Nevertheless, this would likely lead to an underestimation of results compared to the actual effect of the intervention."; "However, the lack of data on HbA1c and serum lipids, i.e. total cholesterol, Apo-A1 and Apo-B, at follow-up is a limitation. There were few participants with baseline values outside clinical references and following clinical practice, only those with pathological values outside the reference were referred for a second blood sampling at the follow-up assessment."
21-i) Generalizability to other populations: in particular, discuss generalizability to a general Internet population, outside of a RCT setting, and to a general patient population, including applicability of the study results for other organizations.
Response: "Our study included both men and women, as well as employees from different types of professions, i.e. bus drivers and office workers, potentially representing different socioeconomic groups in society. However, since employment was a prerequisite for participation in our study, this may have created a selection of healthier people than the general population. Nevertheless, this would likely lead to an underestimation of results compared to the actual effect of the intervention."

21-ii) Discuss if there were elements in the RCT that would be different in a routine application setting (e.g., prompts/reminders, more human involvement, training sessions or other co-interventions) and what impact the omission of these elements could have on use, adoption, or outcomes if the intervention is applied outside of a RCT setting.
Response: Not applicable.

23) Registration number and name of trial registry.
Response: "ClinicalTrials.gov Identifier: NCT03579342"

24) Where the full trial protocol can be accessed, if available.
Response: "The design of the Health Integrator study has been described in detail previously."
25) Sources of funding and other support (such as supply of drugs), role of funders.
Response: "The Health Integrator was developed as a part of the European collaboration "The health movement - for patient empowerment". This trial was one of the work packages (work package ID A1803) supported by the European Institute of Innovation and Technology, EIT (grant number 17088)."

X27) Conflicts of Interest (not a CONSORT item). X27-i) In addition to the usual declaration of interests (financial or otherwise), also state the relation of the study team towards the system being evaluated, i.e., state if the authors/evaluators are distinct from or identical with the developers/sponsors of the intervention.
Response: "The authors declare no conflicts of interest."

As a result of using this checklist, did you make changes in your manuscript? What were the most important changes you made as a result of using this checklist?
Response: We added a new reference.
Scoping Review of Systems to Train Psychomotor Skills in Hearing Impaired Children

Objectives: The aim of this work is to provide a scoping review to compile and classify the systems helping train and enhance psychomotor skills in hearing impaired (HI) children.
Methods: Based on an exhaustive review on psychomotor deficits in HI children, the procedure used to carry out the scoping review was: select keywords and identify synonyms, select databases and prepare the queries using keywords, analyze the quality of the works found using the PEDro Scale, classify the works based on psychomotor competences, and analyze the interactive systems (e.g., sensors) and the achieved results.
Results: Thirteen works were found. These works used a variety of sensors and input devices, such as cameras, contact sensors, touch screens, mouse and keyboard, tangible objects, and haptic and virtual reality (VR) devices.
Conclusions: From the research it was possible to contextualize the deficits and psychomotor problems of HI children that prevent their normal development. Additionally, from the analysis of different proposals of interactive systems addressed to this population, it was possible to establish the current state of the use of different technologies and how they contribute to psychomotor rehabilitation.

Introduction
Hearing loss in children can vary from mild to profound [1]. Hearing damage can occur due to diseases, infections or vestibular damage. This problem can interfere with sensorimotor function, causing delays in the psychomotor development of hearing impaired (HI) children, i.e., in motor skill, balance, dynamic coordination, and visual-motor coordination performance, among other aspects [2][3][4][5]. Psychomotricity integrates the cognitive, emotional, symbolic and physical interactions in the individual's capacity to be and to act in a psychosocial context. Motor development lays the basis for more complex psychological abilities, such as emotion regulation or symbolism. Thus, the adequate acquisition of basic psychomotor areas, such as body schema (an essential part of body awareness, body image and self-esteem), gross motor skills (i.e., posture, balance), fine motor skills, space, time and rhythm, is determinant for the development of cognition, emotion and social interactions. As hearing impairment may impact the psychomotor development of HI children, the objective of the present work was to revise the literature to compile and classify the systems used to help train psychomotor skills in HI children. Particularly, the aim of this review was to compile the endeavors being done in the field, to show which areas could be enriched with the use of technologies and interactive systems. Technology is being increasingly incorporated into rehabilitation processes, providing additional benefits such as real-time feedback or making the exercises more attractive, which increases adherence to treatment [6,7].

Psychomotor Limitations in Hearing Impaired Children
Psychomotor abilities are usually classified into fundamental motor skills (i.e., body schema, body image, posture, balance and coordination), perceptual motor skills (i.e., space, time and rhythm) and cognitive skills (executive functions, such as memory or reasoning processes). The present study is focused on psychomotor abilities associated with motor skills, in relation to psychological processes.
Fundamental Motor Skills
HI children typically have lower scores on psychomotor scales than normal hearing (NH) children [2,8]. For instance, more than 30% of HI children showed retardation in the acquisition of head control or independent gait [9]. Poor motor performance does not appear to affect self-efficacy in HI children [10], but it has been related to language deficiencies, poorer symbolic play, emotion dysregulation and social difficulties in interacting with other children [8,11]. Children with cochlear implants show a drop in their gross motor performance coinciding with surgery, and a period of at least two years is needed to recover from the developmental delay [12]. HI children exhibit worse gait performance than NH children, with abnormal ground reaction forces, higher propulsion and lower free movements [13,14]. Higher hearing impairment determines worse postural recovery and gait performance [14,15], and hearing aids and cochlear implants may help promote improvements in gait and stability during walking [16].
Body perception. Hearing impairment disturbs the experience of one's own body and body-related abilities. Body perception deficits may play an important role in action performance and may be the cause of many unexplained daily difficulties suffered by HI children [17].
Posture is determined by the internal representation of the body in the surrounding space. Posture is permanently adjusted to environmental modifications by the continuous central integration of multisensory inputs that trigger motor commands for allowing stability. Thus, the unceasing inputs coming from the visual, vestibular and proprioceptive systems provide the brain with information on spatial context, head movement and position, and movement and position of the different body segments, respectively, which is crucial for posture maintenance and balance. HI children show higher postural instability and less ample head movements than NH children, which may indicate damage of the vestibular system [3,18]. Postural stability of HI individuals improves with adaptive sensory compensation (visual and vestibular) [19]; thus, HI children may benefit from exercise programs aimed at improving body posture maintenance and balance control [20].
Balance is the ability to adapt postural control to remain stable under different modifications of the environment. Balance develops during childhood, becoming a paramount parameter for the achievement of gross motor skills, such as running or jumping/standing on one leg [21]. Auditory inputs provide additional cues to control balance, creating a hearing "map" of the surroundings that NH individuals use to maintain balance control and reduce postural sway [16,22]. HI children may experience balance difficulties, especially those with vestibular deficits [23,24] or within the first year of cochlear implants, when children exhibit higher rates of vestibular loss [25][26][27][28]. Thus, HI children have shown lower stability limits, faster and higher body sway, and higher energy expenditure to keep balance than NH children, indicating a deficit in static and dynamic balance [14,26,29,30]. HI children tend to use visual feedback to a higher degree than NH children, especially when balance is compromised by sensory disturbance (e.g., an irregular surface) and the risk of falling increases [29,30].
Hearing aids, vestibular rehabilitation and physical exercise have proven effective to enhance vestibular adaptation and improve balance in HI children or after cochlear implant surgery [22,25,31]. HI individuals with additional vestibular deficits seem to exploit auditory cues to a higher degree, due to the reduced sensory redundancy [22]. On the other hand, vestibular dysfunction and its resulting balance deficits have been identified as risk factors for cochlear implant failures [32].
Coordination is defined as the global functionality of muscle groups, in a specific temporal order, resulting in the progressive contraction of agonists and the simultaneous inhibition of antagonists to achieve a motor outcome. Coordination is present in all motor functions (e.g., visuomotor, bimanual). In gross motor function, coordination is refined later in HI children, who achieve accurate execution of large body actions (e.g., running) at older ages than NH children [12]. Thus, motor skills such as catching a ball, which require visuomotor, spatial and temporal coordination, remain impaired for longer in HI children, with higher reaction times than in NH children [33]. Auditory deprivation also affects motion perception [34] and motor sequence learning [35]. Fine motor function (i.e., manipulation or manual dexterity) experiences a delay as prelingually HI children grow up [35,36]. Associations between fine motor function and receptive and expressive language in HI children post-implant [36] suggest common brain networks and seem to indicate that auditory deprivation leads to atypical motor and language development. Visuomotor integration development is impacted by early auditory and linguistic experience and seems to elicit different cognitive resources in HI children [36].
Perceptual Motor Skills
Spatial skills are defined as the mechanisms allowing the awareness of object position and its relationships in the environment [37]. HI children compensate for auditory deprivation by heightening attention to stimuli in the near and far space visual fields (to central visual stimuli in far space and to peripheral visual stimuli in near space) [38,39]. Other adaptation mechanisms in HI children are higher location memory [40] and higher visual and tactile orientation in the allocentric frame of reference (encoding the position of an object in relation to others), which allows fast attention to targets [38,41]. Nevertheless, discrimination at midline and lateral positions [29] and the egocentric frame of reference (encoding the position of an object in relation to one's own body) seem to be abnormal in HI children. In this sense, goal-directed movements towards objects, based on the egocentric frame of reference, are slower than in NH children [41]. In addition, auditory deprivation affects brain spatial organization; thus, in contrast with NH children, who have shown right hemisphere activation for spatial attention, an atypical bilateral or left hemisphere activation has been seen in HI children [40,42]. Furthermore, spatial cognition is promoted by expertise in spatial language [43], and language absence results in poor performance on non-linguistic spatial tasks, particularly those combining different spatial representations [44]. For example, a consistent linguistic marking of left-right is associated with search under disorientation, and a consistent marking of ground information is associated with search in rotated arrays in deaf signers [43].
In contrast with temporal abilities, spatial competence is likely to improve in HI children, particularly with the use of sign language [45]. Temporal skills recall the order, timing and sequence of stimuli [46]. Hearing loss diminishes the capacity of using fine temporal signals for recognizing speech and non-speech cues out of the variable environmental noise [47]. Temporal processing of proprioceptive and tactile signals is also compromised in children with HI [48]. Event-related brain potentials revealed less precise phonological representations of the rhythm of oral language or the location of sign language in HI, compared to NH children [49]. Nevertheless, cochlear implant users had similar performance in temporal organization tasks to NH individuals [50]. On the other hand, cross-modal plasticity driven by experience may allow HI individuals a normal performance when synchronizing temporally discrete visual stimuli and visual timing [51].
Search Strategy
The following databases were used for the search: Science Direct, ACM Digital Library, IEEEXplore, NCI (PubMed, Bethesda, MD, USA), Springer Link and CiteSeerX. The following key words were used: auditory deficiencies, children, psychomotor, Human Computer Interaction (HCI), interactive systems, serious games, and synonyms (e.g., hearing impairments, videogames, virtual systems). During the bibliographic analysis, we identified different contributions to the area of psychomotor training in HI children. Inclusion criteria were: studies whose title or abstract was related to helping train and enhance psychomotor skills in hearing impaired (HI) children, studies published in the last ten years in recognized journals or international events in the areas of health, biomedicine, computer science and HCI, and studies describing the electronics or interactive elements used in systems aiming at psychomotor skills improvement. Exclusion criteria were: studies addressing cognitive psychomotor skills and repeated studies. Thirty-eight papers were identified in the first search (applying the key words), which were reduced to 13 after applying the inclusion and exclusion criteria. These contributions were classified into fundamental motor skills (posture, coordination and balance) and perceptual motor skills (spatial, temporal and rhythm). No references related to body image or body schema were found.
Quality
The PEDro Scale was used to evaluate the quality of the 13 papers found in the search and therefore included in the review. This scale, based on the consensus of health experts, aims at identifying the works that have sufficient validity and statistical information to make their results interpretable [52]. The PEDro Scale evaluates quality using eleven criteria, such as the selection criteria, the random assignment of the study subjects, the participation of the therapists and evaluators, or the comparison and visualization of the data. The PEDro Scale criteria assign three values of quality: low risk of bias (1), unclear risk of bias (2), high risk of bias (3).
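Purely as an illustration of how such per-criterion ratings might be tallied per study, a minimal Python sketch follows; the study labels and ratings are hypothetical, not the actual scores behind Figure 1:

```python
from collections import Counter

# Hypothetical per-study ratings for the eleven criteria:
# 1 = low risk of bias, 2 = unclear risk of bias, 3 = high risk of bias.
studies = {
    "study_A": [1, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2],
    "study_B": [1, 1, 2, 2, 2, 2, 2, 2, 2, 1, 2],
}

for name, ratings in studies.items():
    counts = Counter(ratings)
    # A study dominated by "unclear risk" ratings (value 2) signals that its
    # evaluation did not follow standard procedures, reducing data reliability.
    dominant = counts.most_common(1)[0][0]
    label = {1: "low", 2: "unclear", 3: "high"}[dominant]
    print(f"{name}: mostly {label} risk of bias ({dict(counts)})")
```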
In the case of our review, papers following our inclusion criteria were scarce. The PEDro scale showed high values for unclear risk of bias (2) in most of the studies. In many of those studies, the evaluation with patients did not follow standard procedures, which reduces the reliability of the data. One reason explaining this fact could be that most of the papers were published in conferences focused on human-computer interaction; therefore, the objective of the work was the presentation of the design and evaluation of the system, instead of analyzing long-term progress of psychomotor skills. Figure 1 shows the quality of the works included in the review, according to the PEDro scale criteria.
Fundamental Motor Skills
In this section, we present the interactive systems and applications that aimed at training the fundamental motor skills of HI children, which involve coordination, posture and balance. Iversen and Kortbek [53] built an interactive floor for children with cochlear implants to interact with through body movements. Their proposal includes a set of games aiming at language training and implant calibration. To rehabilitate the upper body limbs, Wille et al. [54] used a VR environment and tested the system with children for three weeks. The games presented in the environment helped children to rehabilitate motor skills without stress. Further, children improved their hand function. Marnik et al. [23] developed a system that can be used for therapy and education based on natural body movements and gesture interaction.
The system uses computer vision techniques to detect body movement and presents an attractive application to engage the user. The system is addressed to children with developmental problems, e.g., ASD or HI children, and presents simple physical exercises or gives instructions, such as "standing on a specified place and rise hands", for the child to follow. Egusa et al. [55] developed a system based on Microsoft Kinect to allow HI children to enjoy and participate in a puppet show while developing their body expression. Radovanovic [56] examined the influence of specialized software on the visual-motor integration of profoundly deaf children. The evaluation was done with 70 children, 43 of whom formed an experimental group that used the computer once a week for five months. Results showed higher scores for the experimental group in a subtest of the Acadia test, but they were significantly higher only for seven-year-olds. Nevertheless, the authors supported the benefits of using videogames to improve visual-motor skills. Noorhidawati et al. [57] compiled situations in which children engage with mobile apps to better understand how they learn through such interaction. They carried out a qualitative approach by observing 18 pre-schoolers' interactions when using 20 mobile apps. The analysis demonstrated that learning in this environment takes place through cognitive, psychomotor-based, and affective means. Further, the use of these apps can help in improving the aforementioned skills. To improve body posture when performing physical exercises such as squats, Conner et al. [58] proposed a system based on computer vision techniques (via Microsoft Kinect). Results of the evaluation showed an improvement of the children's body posture when following the instructions of the system. Zhu et al. [59] used tangible objects to interact with digital elements in a role-playing collaborative game. Children trained their body expression and movements while playing with their teammates, fighting against the enemies and waving weapons according to different rhythms.
Perceptual Motor Skills
The rehabilitation of perceptual motor skills develops the perception of space, time and rhythm in HI children. However, few contributions were found by our review. To train rhythm, Jouhtimäki et al. [60] developed an educational tool which aimed at improving the children's skills to identify and produce rhythmic patterns. These abilities also support language perception and literacy. Pérez-Arevalo et al. [39] designed a game similar to the previous work, but they chose to deploy it on mobile devices. The proposal presents a game to train rhythm and coordination (visual motor) for HI children based on visual and auditory stimuli. They evaluated the system with nine children who had cochlear implants or used auditive aids. Children enjoyed the game and they all liked the emotional aspects of the design, such as the customization of the main character, the graphical design of the character and the app, or the storyline. Correa et al. [58] proposed the construction of an interactive multimodal system for rehabilitating skills such as hand-eye coordination, memory, rhythm and tempo. The proposal is a multimodal system, similar to a piano, that allows visualization and feedback of musical notes through vibrations and exploits the perceptual phenomenon of synesthesia, which relates colors and sounds.
The goal of the system is the improvement of rhythmic perceptual skills through coordination, visualization of colors, and tempo. Sogono and Richards [61] proposed the design of a game involving electronic elements, which would allow HI children to perform sound localization activities in closed spaces. In addition, they also proposed templates to facilitate the design of this type of multisensory environment. Finally, Aditya et al. [62] proposed a project to help children better understand abstract concepts such as time, through the use of tangible objects together with visual feedback. With the proposal they built an intelligent clock that represented each of the activities that the child carried out in his or her daily life. Table 1 shows the classification of the works according to the type of skill that they impact (fundamental motor skills and perceptual motor skills).
Results
We analyzed the sensors used in the systems found in the research to study the current technology used to train psychomotor skills in HI children.
Computer Vision
When the computer is able to "see", it can use the visual information for interaction purposes. The use of one or more cameras, i.e., webcams or Microsoft Kinect, allows the computer to sense the user and his/her actions, gestures or postures. Works by Egusa et al. [55] and Conner et al. [63] used RGBD cameras to sense the user's body posture and the gestures performed, whereas the work by Marnik et al. [23] focuses on computer vision algorithms to recognize gestures. During the assessment, researchers concluded that the use of computer vision techniques in motor training allowed participants to be immersed in the proposed activities, and participants found them motivating.
Interactive Floors
Interactive floors are frequently classified into sensor-based or vision-based. In the case of Iversen and Kortbek [53], they combined both types to construct an interactive floor setup with vision tracking limb contact points from below the floor surface. During the evaluation of their system, the authors found that interactive floor-based systems invited participants to interact actively and collaboratively, allowing an exchange of knowledge and communication between users. It also offered children a way to create their own games related to the exercises they had to do, which increased motivation and engagement.
Touch Screens
Touch screens are present in mobile devices, tablets and mobile phones. The spread of these devices and the familiarity with this interaction mode help children to focus directly on the tasks to carry out [39,62]. Researchers used tablets [39] or smart watches [62] to teach children rhythm and eye-hand coordination. During the evaluation, the authors stated that interactive systems based on game actions related to activities of daily living (ADL), and with a smooth learning curve, allowed the children to quickly develop skills related to the rehabilitation goals in an amusing and motivating way.
Mouse and Keyboard
Mouse and keyboard are still the traditional input devices for desktop computers and have been used to control the interaction of applications for HI children [60]. During the evaluation of this proposal, the authors found that using these devices in an interactive game supported children with some kind of disability in acquiring rhythm patterns, which may enhance language development.
Tangible Artifacts
By using Arduino or similar hardware boards, researchers created artifacts for HI children to hold and manipulate, which controlled the interaction and informed about their position or movement through accelerometers [59]. In the evaluation of the system, participants were more motivated to participate collaboratively in the activity due to the use of tangible elements such as swords, shields and wands. However, some participants had difficulties due to the lack of tempo management and music knowledge; therefore, the authors concluded that visual aids should be used.
Multimodal Systems
We also found sensors integrated in multimodal systems, which combined different input and output modalities, such as touch, computer vision and tangible objects, together with visual and haptic feedback [58,61]. Researchers concluded that the use of non-traditional interaction devices made participants consider the activities entertaining and attractive, because they invited the use of different senses and effectors.
VR Devices
VR devices such as data gloves and head-mounted displays (HMDs) have also been used [54] to interact in a VR environment. During the evaluation [54], some participants presented evidence of improvement in activities proposed by the interactive system. However, despite the fact that participants showed a high degree of motivation when performing the different actions in the game, some did not progress as expected, showing a low level of progress in therapy.
Computer Vision
The results obtained by [23,55,63] were achieved thanks to the use of technologies and sensors that allowed the tracking of the different movements of the patient's body, offering accurate real-time feedback on their movements and informing them of their correctness. Additionally, when presenting these exercises as play activities, participants felt more comfortable and motivated, because they were immersed in the exercise [64,65].
Interactive Floors
The use of a contact surface, together with a projection and a camera, allowed the creation of different interactive spaces. Users had fun while performing physical activities [66]. Some of the results of the use of this technology can be observed in [53] where, based on the proposed interactive environment, children were more motivated to carry out the rehabilitation activities, and the technology provided allowed them to be in a playful and collaborative environment with the freedom to carry out different activities.
Touch Screens
The use of touch technologies allowed patients to use their upper limbs to train fine motor skills and hand-eye coordination [67]. The use of this type of technology in the proposals [39,62] allowed participants to visualize and interact with the applications in real time while training fine motor and hand-eye coordination skills.
Mouse and Keyboard
The acquisition of motor skills in the upper limbs can be carried out with traditional devices such as keyboards and mice. The game to train rhythm skills proposed by [60] allowed the user to perform each of the activities comfortably, following the patterns and exercises offered by the game.
Tangible Artifacts
Tangible objects can be used in motor training and play activities because they incorporate different sensors, such as accelerometers, gyroscopes and radio frequency identifiers, which allow patients to interact directly with them. Further, the system outputs real-time video and audio according to the patient's actions on the objects.
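As a rough sketch of how a gesture event might be detected from such accelerometer readings, consider the following Python fragment; the samples and threshold are hypothetical, and real systems such as [59] use their own hardware and processing:

```python
import math

# Hypothetical stream of 3-axis accelerometer samples (in g) from a tangible
# object such as a toy sword; a real system would read these from the sensor.
samples = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.1), (1.8, 0.3, 1.2),
           (2.2, 0.5, 0.9), (0.2, 0.1, 1.0)]

SWING_THRESHOLD = 1.5  # assumed magnitude (in g) above which we call it a swing

def magnitude(sample):
    """Euclidean norm of one accelerometer sample."""
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

for i, s in enumerate(samples):
    if magnitude(s) > SWING_THRESHOLD:
        # In a real game this event would trigger real-time audio/visual feedback.
        print(f"sample {i}: swing detected (|a| = {magnitude(s):.2f} g)")
```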
The proposal by [59] clearly shows that the use of tangible objects contributed to a greater willingness to participate in the activities and to collaborate with other children while performing the exercises and having fun.
Multimodal Systems
Multimodal systems allow the construction of interactive environments in which users can use different senses and carry out play activities that contribute to their psychomotor development [68][69][70][71]. This can be clearly seen in [58,61], where the created devices involved different feedback, such as haptic and visual cues, that allowed the patient to perceive the information through multiple senses and react to the stimuli, making the activity more engaging.
VR Devices
VR systems allow users to interact in a virtual world. Users can perform actions and receive direct feedback through the objects of the virtual world with which they interact. This type of technology allows children with different types of psychomotor problems to carry out actions that cannot be done naturally in the real world, obtaining a feeling of satisfaction while doing the activity. This is clearly observed in proposal [51], as the use of VR devices such as the glasses helps children to participate in the rehabilitation process with a high level of motivation.
Limitations
This study has some limitations that must be taken into account for the correct interpretation of the results. The quality assessment of the selected works, using the PEDro scale, was performed by only one researcher; although the PEDro criteria are clear and well-defined, separate evaluation by different researchers would have increased the reliability of the scores.
Conclusions
HI children present deficits in their psychomotor development, which affect the physical, social and emotional dimensions of their development. Therefore, research aiming at the improvement of the psychomotor skills of this population is needed. Based on an exhaustive review on psychomotor deficits in HI children, the aim of this work was to compile the endeavors being done in the field, to show which areas could be enriched with the use of technologies and interactive systems. The review evidenced the use of different interaction technologies, such as touch screens, tangible objects, traditional devices (mouse and keyboard), cameras and VR devices, which, together with different electronic elements like Arduino, RFID or buttons, allowed the creation of interactive environments and more entertaining and motivating activities. The results of the analyzed proposals showed that HI children felt more satisfied with the activities. This feeling was due to the fact that the different technologies offered the children real-time direct feedback on the performance of the different actions, which allowed them to feel part of their rehabilitation process. Thus, technology could increase the enjoyment of the rehabilitation sessions and increase adherence to treatment, which is of paramount importance at these ages. Despite these results, this review showed that there is a lack of tools to support HI children during their therapy, due to reasons such as the cost of the technology or the limited access to children with this kind of disability.
This should encourage the scientific and academic community to actively participate in the generation of interactive systems that address the psychomotor development needs of HI children in a motivating and attractive context, offering interactive tools that use low-cost digital and electronic elements. In addition, the studies showed a high prevalence of unclear risk of bias, which made it difficult to assess their success in improving psychomotor skills in HI children. Many reasons can explain this low quality: the lack of long clinical trials, the cost of certain technologies and of building the systems, etc. Investigating the barriers to the use of the technology in clinical settings would help identify the essential reasons. Moreover, long-term studies investigating the effects on the improvement of psychomotor skills would increase the quality of the present knowledge.
Funding: This work has been partially supported by the project TIN2016-81143-R (AEI/FEDER, UE) funded by MICINN (Government of Spain) and OCDS-CUD2016/13 funded by the OCDS at the University of Balearic Islands.
Spatio-temporal characteristics of frictional properties on the subducting Pacific Plate off the east of Tohoku district, Japan estimated from stress drops of small earthquakes

The east coast of the Tohoku district, Japan has a high seismicity, including aftershocks of the 2011 M9 Tohoku earthquake. We analyzed 1142 earthquakes with $4.4 \le M_W \le 5.0$ that occurred in 2003 through 2018 and obtained the spatio-temporal pattern of stress drop on the Pacific Plate that subducts beneath the Okhotsk Plate. Here we show that small earthquakes at edges of a region with a large slip during the 2011 Tohoku earthquake had high values of stress drop, indicating that the areas had a high frictional strength and suppressed the coseismic slip of the 2011 Tohoku earthquake. In addition, stress drops of small earthquakes in some of the areas likely decreased after the 2011 Tohoku earthquake. This indicates that the frictional strength decreased at the areas due to the following aftershocks of the 2011 Tohoku earthquake, consistent with a high aftershock activity. This also supports that the frictional properties on a subducting plate interface can be monitored by stress drops of small earthquakes, as pointed out by some previous studies.

Tectonics and characteristics of earthquakes off the east coast of the Tohoku district, Japan
Numerous large earthquakes as well as the huge 2011 Tohoku earthquake with M_W 9.0 have been observed off the east coast of the Tohoku district, Japan, associated with the subduction of the Pacific Plate beneath the Okhotsk Plate at a rate of 80-100 mm/year (DeMets et al. 1990). Some previous studies have suggested the spatial heterogeneity of the frictional properties on the plate interface in this region. Yamanaka and Kikuchi (2004) analyzed source processes of large interplate earthquakes that occurred off the east coast of the Tohoku district, Japan and found that areas with a large coseismic displacement, which they refer to as asperities, were distributed as stepping stones. They also pointed out that the typical size of individual asperities in northeastern Japan was M7 class and that an M8 class earthquake could be caused when several asperities were synchronized. Uchide et al. (2014) investigated the spatial pattern of stress drop before the 2011 Tohoku earthquake in the same region by a method different from that of our present study. They reported a lateral variation along the strike direction of the 2011 Tohoku earthquake and pointed out that a high stress drop zone was located just south of the large slip area of the earthquake, which possibly acted as a barrier to further rupture propagation during the 2011 Tohoku earthquake. They also found a strong increase in stress drop with depth between 30 km and 60 km, and that earthquakes at shallower and deeper depths had nearly constant stress drop. Nishikawa et al. (2019) analyzed waveforms of slow earthquakes observed by the new S-net ocean-bottom seismic network and investigated their spatial distribution along the Japan Trench.
They found that the area that ruptured during the 2011 Tohoku earthquake was bounded by areas that have large numbers of slow earthquakes. They reported that a segmentation likely caused the coseismic rupture of the 2011 Tohoku earthquake to cease, which provides important information for a risk assessment of future major earthquakes. Baba et al. (2020) detected very low frequency earthquakes (VLFEs) off the Hokkaido and Tohoku Pacific coasts by a matched-filter technique. They pointed out that their spatial distribution is consistent with the afterslip of the 2003 Tokachi-Oki earthquake (M_W 8.0). They also found that the VLFE activity inside a large coseismic slip area of the 2011 Tohoku earthquake was low thereafter, whereas outside the area, VLFE activity increased after the 2011 Tohoku earthquake. These results suggest that there is significant spatial heterogeneity of the frictional properties on the plate interface in this region. As a lot of small earthquakes have been occurring on the subducting Pacific Plate off the east coast of the Tohoku district, they provide an excellent opportunity for studying the pattern of their stress drops in both space and time and its implication with respect to the frictional properties on the plate interface. We investigated stress drops of these small earthquakes in this study following the method of Yamada et al. (2010, 2015, 2017) and discussed the correlation of their spatial pattern with the coseismic slip distribution of the 2011 Tohoku earthquake and other large historical earthquakes, as well as their temporal change. In the next subsection, we summarize the significance of stress drop analysis.
Significance of analysis on stress drop
Stress drop is a fundamental and crucial source parameter which indicates the difference between the initial and residual stress levels associated with an earthquake. Let us consider what kind of information we can retrieve from values of stress drop. First, we will consider the effect of the initial stress. As the stress drop is the difference between the initial and final stresses, it would be affected by the heterogeneity of the initial stress. Here we have to note that a point with an extremely low initial stress will not be included inside the fault plane during an earthquake, because the dynamic stress concentration during the earthquake will not be enough to rupture the point. The heterogeneity of the initial stress will of course affect values of stress drop, but extreme heterogeneity of the initial stress will not be included in the stress drop estimates, because the value of stress drop will be estimated for each earthquake and will thus reflect the average characteristics over the fault plane. We will move on to the discussion of the effect of the dynamic stress level. Di Toro et al. (2011) showed that the dynamic frictional coefficient for regular earthquakes, whose slip rate will be around 1 m/s, will become less than 0.4. As the confining pressure would be around 200 MPa at a depth of 60 km, earthquakes with a depth of 60 km would have a value of dynamic stress level lower than 80 MPa. Because the huge heterogeneity of the initial stress will not be included in values of stress drop as we discussed earlier, this implies stress drops of these earthquakes may have an ambiguity of 80 MPa at most as an indicator of the difference between the shear strength and the dynamic stress level.
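A worked check of this 80 MPa bound, assuming simple Coulomb friction with the values quoted above (a sketch, not part of the original analysis):

```python
# Upper bound on the dynamic stress level, assuming Coulomb friction
# tau_dyn = mu_dyn * sigma_n with the values quoted in the text.
mu_dyn = 0.4        # dynamic frictional coefficient (< 0.4; Di Toro et al. 2011)
sigma_n = 200e6     # confining pressure at ~60 km depth, in Pa (~200 MPa)

tau_dyn_max = mu_dyn * sigma_n
print(f"dynamic stress level < {tau_dyn_max / 1e6:.0f} MPa")  # -> 80 MPa
```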
This suggests that a spatial pattern of stress drop with heterogeneity larger than the possible range of the dynamic stress level can be treated as a qualitative indicator of the spatial distribution of the shear strength. As a result, we can treat values of stress drop as important physical proxies associated with the fault strength. Stress drops have been vigorously investigated, and these studies have largely confirmed that earthquake rupture is mostly self-similar (Kanamori and Anderson 1975; Abercrombie 1995; Prieto et al. 2004; Yamada et al. 2005, 2007; Kwiatek et al. 2011; Yoshimitsu et al. 2014). This self-similarity supports the fact that we can investigate the difference between the shear strength and the dynamic stress level by stress drop analysis, independent of the earthquake size. We have to note here that the self-similarity of earthquakes over a broad range of magnitude remains a matter of debate and is not fully confirmed. Some studies have pointed out that earthquakes might have a weak dissimilarity (Malagnini et al. 2008). However, small earthquakes within a narrow range of magnitude, as investigated in this study, do not indicate a strong dissimilarity, and their stress drops provide important information on frictional properties. Heterogeneity of stress drop in space and time has been investigated, especially over the last fifteen years. Allmann and Shearer (2007) estimated stress drops of small earthquakes and discussed their variations in space and time around the source region of the 2004 Parkfield, California, earthquake. They pointed out that earthquakes around the coseismic rupture area of the 2004 Parkfield earthquake had relatively higher values of stress drop. On an even finer scale, Yamada et al. (2010) calculated stress drops of small earthquakes which took place on the fault plane of the 2006 Kīholo Bay earthquake with M_W 6.7 in Hawaii and discussed the spatial characteristics of stress drop compared to the coseismic slip distribution of the earthquake. They reported that small earthquakes around patches with a large displacement during the main shock had larger values of stress drop and concluded that the spatial pattern of the stress drop reflects coherent variations in the difference between the strength and the residual stress level. Urano et al. (2015) carried out a similar analysis for the 2007 Noto Hanto earthquake and pointed out that static stress drops of aftershocks in the area with a large coseismic slip during the mainshock are higher than those in a small slip area. Their result also suggests that in-situ frictional properties can be estimated from stress drops of small earthquakes, supporting the conclusion of Yamada et al. (2010). Oth (2013) pointed out that stress drops have a strong correlation with the spatial pattern of heat flow in Japan. Yamada et al. (2015) investigated a swarm-like earthquake activity in the Tanzawa Mountains region, Japan and found that it showed hypocenter migration and consisted of earthquakes with a small stress drop. They concluded that the activity would be triggered by an increase of pore pressure due to fluid, that is, a decrease of the shear strength. Yamada et al. (2017) found, through stress drop analysis of small earthquakes, that the spatial pattern in stress drops correlates well with the spatial characteristics of coseismic displacements during individual historical large earthquakes off the Pacific coast of Hokkaido, Japan. Moyer et al.
(2018) estimated stress drops of small earthquakes (2.3 ≤ M_W ≤ 4.0) on the Gofar transform fault at the East Pacific Rise and found an inverse correlation between stress drop and the reduction of P wave velocity, which they interpreted as the effect of damage around the fault zone. They also pointed out that earthquakes following the mainshock (M_W 6.0) had lower values of stress drop, consistent with increased damage and decreased fault strength after a large earthquake. Although estimation of stress drop in general includes some assumptions, such as the circular faults explained in the next section, these previous studies strongly suggest that the results of stress drop indicate actual physical characteristics on the fault planes of earthquakes. Some studies, on the other hand, raised the question of whether the values of estimated stress drop reflect frictional properties. Hardebeck (2020) compiled stress drop catalogs of small earthquakes (1.8 ≤ M ≤ 3.1) in Southern California, including results of Shearer et al. (2006) and Hardebeck and Aron (2009), and investigated their spatial correlation with stress drops of moderate-to-large earthquakes, as well as the correlation among catalogs. She examined whether stress drops of larger earthquakes would be predictable from values of stress drop for nearby smaller earthquakes. She pointed out that the spatial correlation was weak between stress drops of moderate-to-large earthquakes and those of small earthquakes, suggesting that the spatial pattern of stress drop derived from the analysis of small earthquakes might be less useful for the prediction of rupture characteristics of future large earthquakes. As results of stress drop can be compared to coseismic displacements of several large earthquakes off the east coast of the Tohoku district, Japan, our study provides an example of whether or not stress drops estimated from seismograms can contribute to the investigation of frictional properties on earthquake faults.

Methods

Following the method of Yamada et al. (2010, 2015, 2017), we investigated stress drops of 1142 small earthquakes (4.4 ≤ M ≤ 5.0) off the east coast of the Tohoku district, Japan, that occurred in 2003 through 2018 (Fig. 1). Both earthquakes before and after the 2011 M9 Tohoku earthquake are included in the analysis. Note that M indicates the magnitude of an earthquake as determined by the Japan Meteorological Agency (JMA) in this paper. The hypocenters of the 1142 earthquakes were located between 35.0°N and 41.5°N and between 140.5°E and 145.0°E, within ±15 km in depth of the interface of the Pacific Plate, which had been derived by Nakajima and Hasegawa (2006). Because of the poor azimuthal coverage of seismic stations, the distance of 15 km in the depth direction is within the range of uncertainty in the hypocenter estimation in the study area. We analyzed waveforms recorded at stations maintained by the National Research Institute for Earth Science and Disaster Resilience, Japan (NIED), Hokkaido University, Hirosaki University, Tohoku University, and JMA. A recorded seismogram as a function of time W(t) consists of the effects of the source S(t), the path from a hypocenter to a seismic station P(t), site amplification effects A(t), and the instrumental response of a seismometer I(t), that is:

W(t) = S(t) * P(t) * A(t) * I(t), (1)

where the operator * indicates convolution.
Because the convolution is expressed as a scalar product in the frequency domain, the following equation holds:

W(f) = S(f) · P(f) · A(f) · I(f), (2)

where W(f), S(f), P(f), A(f), and I(f) are expressions of W(t), S(t), P(t), A(t), and I(t) in the frequency domain, respectively. If we knew the functions of P(f), A(f), and I(f), we could then obtain the Green's function and extract the source term from the observed seismogram. As it is difficult in practice to estimate the Green's function precisely, we adopted the method of empirical Green's function (EGF) (Hartzell 1978). The observed seismograms of two earthquakes at a receiver can be expressed as follows:

W_A(f) = S_A(f) · P_A(f) · A_A(f) · I(f), (3)
W_E(f) = S_E(f) · P_E(f) · A_E(f) · I(f). (4)

If the two earthquakes are colocated in space, the soil beneath the seismic station acts linearly independent of the amplitude of the incoming waveforms, and no velocity change takes place between the two earthquakes, then the path and site effects in Eqs. (3, 4) are exactly the same, which shows

P_A(f) = P_E(f) (5)

and

A_A(f) = A_E(f). (6)

In this case, we can derive the ratio of the source effects of the two earthquakes by calculating the ratio of the observed seismograms in the frequency domain,

W_A(f) / W_E(f) = S_A(f) / S_E(f). (7)

Equation (7) gives the spectral ratio for each pair of recorded waveforms. It is assumed in this study that the source spectrum of an earthquake S^C(f) can be expressed by the omega-squared model of Boatwright (1978), which is formulated as follows:

S^C(f) = R · M_0 / [1 + (f / f_0)^4]^(1/2), (8)

where R, M_0, and f_0 are the coefficient of the radiation pattern, the seismic moment, and the corner frequency of the earthquake, respectively. C indicates the wave type, which corresponds to either P or S. This assumption implies that we approximated the fault as a circular plane. In terms of the spectral amplitude as a function of frequency, the source model of Boatwright (1978) has a sharper bend around the corner frequency than the model of Brune (1970), which is also widely used as an omega-squared model. This characteristic of the Boatwright source model reduces the ambiguity in estimating the corner frequency, resulting in a smaller error in the stress drop analysis. This is why we adopted the Boatwright source model as an omega-squared model. Further discussion will be made in the section "Discussion". The deconvolved spectrum of velocity |u^C_r(f)| can then be expressed by the following equation:

|u^C_r(f)| = R^C_r · M_0r · {[1 + (f / f^C_0E)^4] / [1 + (f / f^C_0A)^4]}^(1/2), (9)

where subscripts A and E correspond to analyzed and EGF earthquakes, respectively. Moreover, R^C_r and M_0r indicate the relative values R^C_A / R^C_E and M_0A / M_0E, respectively. The value of R^C_r is equal to 1 if the analyzed and EGF earthquakes are colocated and have exactly the same focal mechanisms. Please note that the similarity of focal mechanisms of analyzed and EGF earthquakes is not necessarily required in our analysis, because the product P(f) · A(f) · I(f) in Eqs. (2, 3, 4, 5, 6, 7) is independent of the focal mechanism. Also, the discrepancy of focal mechanisms is accounted for by estimating values of R^C_r M_0r in Eq. (9) for individual earthquake pairs. The sampling frequency of waveforms analyzed in this study was 100 Hz. We used waveforms of earthquakes with M3.5 in 2012 through 2018 (after the 2011 Tohoku earthquake) which were closest to the hypocenters of the analyzed earthquakes as the EGFs. A list of analyzed and EGF earthquakes in this study is available as an additional file (refer to Additional file 1). We adopted M3.5 earthquakes as EGFs and analyzed corner frequencies of earthquakes in a relatively narrow magnitude range (4.4 ≤ M ≤ 5.0) because of the following considerations.
In order to ensure a good signal-to-noise ratio of EGFs, especially for spectra at lower frequencies, we used waveforms of earthquakes with M3.5 as EGFs. The lower limit (M4.4) of the analyzed earthquakes was set to keep a difference in magnitude of about 1 compared to the EGFs and to ensure quality in estimating the corner frequencies. As stress drops are calculated for individual earthquakes, the values for large earthquakes would represent the average characteristics of individual large fault planes. This is not a good condition, because the values of stress drop for large earthquakes might not reflect local frictional characteristics. We can avoid this problem by adopting an upper limit of magnitude for analyzing stress drops of earthquakes. We then fixed the maximum size of earthquakes in the analysis to M5.0, whose source dimension is in general a couple of kilometers. Ideally, the source areas of analyzed and EGF earthquakes should overlap so that we can retrieve the ratio of the source effect from Eqs. (7, 8, 9). Referring to Additional file 1, the maximum distance between the analyzed and EGF earthquakes was 53.6 km. Although this absolute value implies that the two earthquakes were not very close for this pair, it is acceptable when taking into account the ambiguity in the hypocenter determination for earthquakes far off the coast, as well as the distance between the hypocenter and a seismic station. In addition, Additional file 1 shows that 95% of the earthquake pairs analyzed in this study have a distance between hypocenters of less than 20 km. Compared to the distance between the hypocenters of analyzed earthquakes and the seismic stations, which is a few hundred kilometers, the distance between the hypocenters of analyzed and EGF earthquakes (less than 20 km) suggests that the EGF earthquakes are valid and the results in this study are reliable. We estimated the spectral ratios of P and S waves for individual pairs of an earthquake and an EGF. The spectral ratios were analyzed for three time windows with a length of 1024 data points, or 10.23 s. The beginning of the first time window was set to be 0.50 s prior to the arrival time of either the P or S wave. The elapsed times of the two successive time windows were set to be 1.28 and 2.56 s, respectively. The spectral ratio can be approximated by the following equations, obtained by taking the logarithm of Eq. (9):

log10 |u^C_r(f)| = log10 (R^C_r M_0r) + (1/2) log10 {[1 + (f / f^C_0E)^4] / [1 + (f / f^C_0A)^4]} (10)
= log10 (R^C_r M_0r) + (1/2) log10 [1 + (f / f^C_0E)^4] - (1/2) log10 [1 + (f / f^C_0A)^4]. (11)

Before fitting individual analyzed spectral ratios with the theoretical function expressed by Eqs. (10, 11), we resampled the data points so that the interval in frequency was equal to 0.05 on a log10 scale. As a result, we had 20 data points (frequency bands) for each order of frequency. This procedure allowed us to treat high- and low-frequency data equivalently. We also estimated the standard deviation of the spectral ratio for each frequency band and used the value as a weight in fitting the data, as explained below. We investigated the values of R^C_r M_0r, f^C_0A, and f^C_0E in Eq. (11) for each station by a grid search that gave the minimum residual for the spectral ratios of the three time windows, which is a procedure similar to that of Imanishi and Ellsworth (2006). Here the value of the residual R can be defined by the following equation,

R = Σ_i (O_i - C_i)^2 / σ_i^2, (12)

where O_i and C_i are the observed and calculated log spectral ratios for the i-th frequency band, and σ_i is the standard deviation for each frequency band calculated in resampling the spectral ratio. All of the earthquakes analyzed in this study have four or more stations available for the calculation of the corner frequency (an illustrative sketch of this fitting procedure is given below).
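To make the fitting procedure of Eqs. (9)-(12) concrete, the following is a minimal Python sketch. It is an illustration, not the authors' code: the function names and grid spacing are invented, the constraint f_0E > f_0A simply encodes the assumption that the EGF event is smaller, and the offset log10(R^C_r M_0r) is solved in closed form for each corner-frequency pair, which is equivalent to grid-searching it.

```python
import numpy as np

def log_boatwright_ratio(f, fc_a, fc_e):
    """Shape of the log10 spectral ratio of two Boatwright (1978) sources
    (Eq. 11 without the offset log10(R*M0r))."""
    return 0.5 * (np.log10(1 + (f / fc_e) ** 4) - np.log10(1 + (f / fc_a) ** 4))

def fit_corner_frequencies(freqs, obs_log_ratio, sigma):
    """Grid search for (log10 R*M0r, fc_a, fc_e) minimizing the weighted
    residual of Eq. (12): sum_i (obs_i - model_i)^2 / sigma_i^2."""
    fc_grid = np.logspace(np.log10(0.3), np.log10(20.0), 60)  # illustrative search grid
    w = 1.0 / sigma ** 2
    best = (np.inf, None)
    for fc_a in fc_grid:
        for fc_e in fc_grid[fc_grid > fc_a]:  # assume the EGF event is smaller: fc_e > fc_a
            shape = log_boatwright_ratio(freqs, fc_a, fc_e)
            offset = np.sum(w * (obs_log_ratio - shape)) / np.sum(w)  # log10(R*M0r)
            resid = np.sum(w * (obs_log_ratio - shape - offset) ** 2)
            if resid < best[0]:
                best = (resid, (offset, fc_a, fc_e))
    return best

# Synthetic check: recover fc_a = 1.0 Hz and fc_e = 2.5 Hz from a noise-free ratio
f = np.logspace(np.log10(0.7), np.log10(20.0), 27)  # ~20 bands per decade, 0.7-20 Hz
obs = 1.0 + log_boatwright_ratio(f, 1.0, 2.5)       # offset 1.0, i.e., R*M0r = 10
resid, (offset, fc_a, fc_e) = fit_corner_frequencies(f, obs, sigma=np.full_like(f, 0.1))
print(offset, fc_a, fc_e)
```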
We used data within the frequency range of 0.7 Hz through 20 Hz for calculating the residual in Eq. (12) and investigated corner frequencies by a grid search from 0.3 to 20 Hz. This frequency range and the length of the time window (10.23 s, or 1024 data points) were adopted so that corner frequencies of earthquakes with 4.4 ≤ M ≤ 5.0 could be estimated correctly; these would be around 1-3 Hz, as expected from the self-similarity of earthquakes. Figure 2 shows an example of the corner frequency analysis, including S-wave velocity seismograms, their spectra, deconvolved spectra after the resampling, and the obtained curves of spectral ratio that were used for investigating a corner frequency. Another example is provided as a supplemental figure (Fig. 10), which shows the analysis of the P wave for the same earthquake and the same station as shown in Fig. 2. Examples for another earthquake are also provided as supplemental figures (Figs. 11 and 12). We confirm that waveforms have a good signal-to-noise ratio, larger than a factor of five, even for low frequencies between 0.7 and 2 Hz for EGF earthquakes. Finally, we estimated the values of stress drop Δσ following Madariaga (1976):

Δσ = (7/16) · M_0 · [f^C_0 / (k^C · V_S)]^3, (13)

where V_S is the shear wave velocity, which we set to 4.5 km/s referring to Matsubara and Obara (2011), and C corresponds to the wave type (P or S). As Eq. (11) gave values of the corner frequency f^C_0A for individual stations, we calculated the stress drop for each earthquake as the log average of the values derived from Eq. (13) for individual stations. The seismic moment M_0 in newton meters (Nm) can be calculated from M_W using the following equation (Hanks and Kanamori 1979): log10 M_0 = 1.5 M_W + 9.1. We fixed the value k as 0.32 and 0.21 for P and S waves, respectively, assuming that the rupture of earthquakes expanded with a speed of 0.9 V_S (Madariaga 1976). It is assumed in this study that M is equivalent to the moment magnitude M_W in calculating stress drops. The effect of the value k on our analysis, as well as the validity of the assumption of M = M_W, will be discussed in the section "Discussion".

Spatial pattern of stress drop

Figures 3 and 4 show the spatial distribution of stress drop estimated from P and S waves, superposed on the coseismic slip distribution of large earthquakes, including the 2011 Tohoku earthquake. The results for individual earthquakes in Fig. 3a, c are available as Additional files 2, 3. The results derived from P and S waves indicate quite similar patterns of spatial heterogeneity in stress drop (Fig. 3e, f), suggesting that the frictional properties on the Pacific Plate are heterogeneous in space. Please note that though we carried out the grid search of the corner frequency for the frequency range between 0.3 and 20 Hz, the estimated corner frequencies are roughly in the range of 1 to 7 Hz. This suggests that the results are within the range of resolvable corner frequencies and do not deviate from the corner frequency fitting criterion of Abercrombie et al. (2017) and Ruhl et al. (2017). We found that areas with a large coseismic displacement during the 1968 Tokachi-oki earthquake have higher values of stress drop. This is consistent with the result of Yamada et al. (2017) and suggests that these areas have a higher shear strength. Similarly, an area with a large stress drop can be recognized at the south-east tip of the coseismic displacement of the 1978 Miyagi-oki earthquake.
It is likely that the area acted as a barrier because of a higher shear strength in 1978, whereas it had been included inside the source area in 1936. In addition, the areas marked as A and B in Fig. 4 have a higher value of stress drop. Both of the areas coincide with the regions where moderate-sized earthquakes regularly take place every several years (Uchida et al. 2007;Okuda and Ide 2018). These results can also be reasonably explained that there is significant spatial heterogeneity in frictional properties and these areas have a higher shear strength. However, we have to note that slip distributions obtained by the waveform inversion may include large Fig. 2 a Example of an analyzed waveform of an earthquake with M4.8. The horizontal color bars show three time windows used in obtaining spectra, which were (S0) -0.50 to 9.73 s, (S1) 0.78 to 11.01 s, and (S2) 2.06 to 12.29 s after the arrival time of S wave. The gray line indicates a time window from 12.00 to 1.77 s before the P arrival, which was used to calculate the noise spectrum in (b). Individual time windows include 1,024 data points. b Waveform spectra for the four time windows marked in (a). c Example of a waveform of an M3.5 earthquake that was used for an EGF. Note that the vertical scale is different from that in (a). d Waveform spectra for the four time windows marked in (c). e Deconvolved spectra with the best-fit omega-squared model. The color lines show deconvolved source spectra, that is, (b) divided by (d), for three individual time windows with a resampling of frequency bands. The black broken line indicates the best-fit omega-squared model with corner frequencies of 1.0 and 2.5 Hz for the analyzed and EGF earthquakes, respectively (Mai et al. 2016). The significance of the results, as well as the discrepancy between the absolute values of stress drop derived from P and S waves will be discussed in the section "Discussion. " Temporal change of stress drop: Effects of the 2011 Tohoku earthquake We would also like to point out the temporal change in stress drop associated with the 2011 Tohoku earthquake. Figure 5 shows spatial patterns of stress drop before and after the 2011 Tohoku earthquake, as well as that estimated from all the earthquakes analyzed in this study (2003 through 2018). Areas marked as C and D, which correspond to western tips of the large coseismic displacement during the 2011 Tohoku earthquake, have higher values of stress drop before the earthquake (Fig. 5a). After the 2011 Tohoku earthquakes, values of stress drop in these areas decreased to the value around the average all over the study area as shown in Fig. 5b. It was reported that the interplate seismicity became lower just after the 2011 Tohoku earthquake (Hawegawa et al. 2011). This might cause an apparent temporal change as if stress drops increased after the 2011 Tohoku earthquake if we could hardly distinguish intraplate and interplate earthquakes, because intraplate earthquakes have in general higher values of stress drop than those for interplate ones. We have checked depths and focal mechanisms of earthquakes in areas C and D on NIED F-net website (https ://www.fnet.bosai .go.jp/event /joho.php?LANG=en), and confirmed the temporal changes in these areas are not apparent ones associated with the temporal change of seismicity on interplate earthquakes. We discuss these results from physical point of view in the next section. 
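For reference, the per-event stress drop values behind the maps and the before/after comparison above follow Eq. (13) combined with the moment relation of Hanks and Kanamori (1979). The following is a minimal sketch of that conversion; the example magnitude and corner frequency are invented for illustration, not values from the catalog.

```python
import numpy as np

VS = 4500.0                    # shear wave velocity (m/s), after Matsubara and Obara (2011)
K = {"P": 0.32, "S": 0.21}     # Madariaga (1976) constants for a rupture speed of 0.9*Vs

def moment_from_mw(mw):
    """Seismic moment in N*m (Hanks and Kanamori 1979): log10 M0 = 1.5*Mw + 9.1."""
    return 10.0 ** (1.5 * mw + 9.1)

def stress_drop(m0, fc, wave):
    """Static stress drop (Pa) via Eq. (13):
    source radius r = k*Vs/fc, then delta_sigma = 7*M0 / (16*r^3)."""
    r = K[wave] * VS / fc
    return 7.0 * m0 / (16.0 * r ** 3)

# e.g., an M_W 4.8 event with an S-wave corner frequency of 1.0 Hz
m0 = moment_from_mw(4.8)                               # ~2.0e16 N*m
print(stress_drop(m0, fc=1.0, wave="S") / 1e6, "MPa")  # ~10 MPa
```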
Interpretation of our results from the physical viewpoint of earthquake rupture

We found that earthquakes at the edges of a large coseismic slip with a large gradient in displacement during the 2011 Tohoku earthquake show a high value of stress drop. As mentioned in the subsection "Significance of analysis on stress drop", stress drop is an indicator of the difference between the shear strength and the dynamic stress level. Therefore, the spatial pattern obtained in this study is likely to reflect the spatial heterogeneity in frictional properties on the plate interface between the Pacific and Okhotsk plates. In addition, stress drops of earthquakes in the above areas seem to have decreased after the 2011 Tohoku earthquake. This would indicate a gradual weakening of the shear strength due to the stress concentration associated with the coseismic slip during the 2011 Tohoku earthquake (Ohnaka and Shen 1999). Another possibility would be an effect of fluid (Yamada et al. 2015). As fluid confined in a crack on the plate interface can reduce the normal stress, an increase of fluid pressure can decrease the shear strength, resulting in a smaller stress drop. It is true that a smaller stress drop would also be observed if the dynamic stress level increased for some reason. However, this would hardly be the case. As we commented in the subsection "Significance of analysis on stress drop", Di Toro et al. (2011) pointed out that the frictional coefficient depends on the slip velocity; a slower slip velocity gives a higher frictional coefficient. Their results suggest that an earthquake with a slower slip velocity would have a higher dynamic stress level. If the decrease of stress drop were caused by a slow-down of the slip velocity, all the events after the 2011 Tohoku earthquake would have had to have a slower slip velocity. If this were the case, the observed waveforms would have shown some notable change in their characteristics, which we did not observe at all. Therefore, we conclude that the observed temporal change in stress drop would be caused not by an increase of the dynamic stress level, but by a decrease of the shear strength due to the stress concentration associated with the coseismic slip of the 2011 Tohoku earthquake.

Difference of stress drops from P and S waves

The value of stress drop for an earthquake calculated from the P wave should be the same as that derived from the S wave, because each earthquake has one value of stress drop. However, our results in Figs. 3, 4 indicate that the absolute values of stress drop deduced from P waves are much lower than the values derived from S waves. One possibility is that corner frequencies for P waves were underestimated because of the limited bandwidth of frequencies. If this were the case, the difference of stress drops derived from P and S waves should become larger for smaller earthquakes, which in general have higher corner frequencies. Another possibility is that this discrepancy originates from the value of k in Eq. (13), that is, the rupture speed assumed in the analysis. The proportionality of stress drops estimated from P and S waves, which is clearly shown in Fig. 3e, f, supports that the latter possibility is the main reason in our analysis. As a result, the discrepancy provides an insight into the rupture characteristics of the analyzed small earthquakes (Yamada et al. 2017). We briefly explain the significance of the discrepancy here.
The model of Madariaga (1976), which is used in this study and is commonly adopted in stress drop estimation, assumes that the rupture initiates at the center of a circular fault plane and propagates with a certain rupture speed that is slower than the P-wave velocity. The values of 0.32 and 0.21 for the constant k in Eq. (13) depend on the assumed rupture speed, which we fixed to be 90% of the shear wave velocity V_S. These factors become smaller for a slower rupture propagation, and the value of k for P waves is much more sensitive to the rupture speed than that for S waves (Madariaga 1976). Because the results of stress drop from P waves in our study are smaller than those from S waves, slower rupture speeds would bring the values of stress drop closer together. Thus, the results in this study suggest that the actual rupture speed of the analyzed earthquakes would be slower than 0.9 V_S. This is consistent with many studies of source processes reporting that the rupture on a fault plane propagates with a speed of 70-80% of V_S, independent of earthquake size (Wald and Heaton 1994; Yamada et al. 2005), with a couple of exceptions, including Ji et al. (2002) and Walker and Shearer (2009). Stress drops derived from P and S waves in Figs. 3, 4 show essentially the same spatial pattern. This strongly indicates that values of stress drop in this study are stably estimated and that their lateral characteristics suggest spatial heterogeneity of the frictional properties on the interface of the subducting Pacific Plate in the study area.

Validity associated with the assumption on earthquake magnitude

We estimated values of stress drop under the assumption that the moment magnitude M_W is equivalent to the value of M determined by JMA. Here we investigate the validity of this assumption, because it might introduce an artifact into our results. Fig. 7 shows the relationship between M and values of 3.5 + (2/3) log10 (R^C_r M_0r) in Eq. (9) for individual earthquakes, which correspond to values of moment magnitude for the analyzed earthquakes if individual pairs of analyzed and EGF earthquakes have identical focal mechanisms (R^C_r = 1) and if the values of M_W for the EGF earthquakes are 3.5 (M3.5 = M_W 3.5). We herein refer to the value of 3.5 + (2/3) log10 (R^C_r M_0r) as an apparent magnitude.

[Fig. 7 caption: Relationship between the JMA magnitude and the apparent magnitude derived as the sum of the magnitude of EGF earthquakes (3.5) and values of (2/3) log10 (R^C_r M_0r) in Eq. (9). Results in (a, b) show those derived from P and S waves, respectively. Red circles indicate average values of apparent magnitudes for individual JMA magnitude bins.]

As we adopted earthquakes with M3.5 as EGFs, Fig. 7 suggests that the actual values of M_W would be slightly smaller than the M values, which is consistent with Uchide and Imanishi (2018). This result implies that the absolute values of stress drop derived in this study might be overestimated. However, Fig. 7 shows a clear linear correlation between the two parameters with a slope of 1. This fact strongly supports that the spatial pattern in stress drop obtained in this study is stable and reliable.

Comparison to previous studies

Uchide et al. (2014) investigated the spatial pattern of stress drop before the 2011 Tohoku earthquake in the same region by a method different from that of this paper. The spatial distribution of stress drop before the 2011 Tohoku earthquake in our results is highly consistent with that shown in Uchide et al. (2014), suggesting the robustness of our analysis.
However, there is a discrepancy between the two results. Although Uchide et al. (2014) reported a strong increase in stress drop with depth between 30 km and 60 km, such an increase cannot be seen in our result (Fig. 6). We are not sure of the reason, but one possibility would be the difference of waveforms used as empirical Green's functions. We used waveforms of earthquakes (M3.5) in the vicinity of individual analyzed earthquakes, with a good signal-to-noise ratio over the whole frequency band used in the analysis (Figs. 2, 10, 11 and 12), as local empirical Green's functions so that fine-scale heterogeneity could be detected. Cocco et al. (2016) compiled stress drops calculated based on several source models, such as Brune (1970) and Madariaga (1976). Compared to the results of other studies compiled in Cocco et al. (2016), the results of this study have a wider variety of stress drops, including extremely high values for several earthquakes. One reason is that the rupture model of Madariaga (1976) is assumed in this study. Another reason would be that we used the model of Boatwright (1978) as an omega-squared model in estimating corner frequencies from spectral ratios. Further discussion will be made in the following two subsections. Accumulation of in-situ seismic data, including data observed by S-net (Kubota et al. 2020), will provide a good opportunity for investigating source characteristics, such as stress drop, of earthquakes with higher precision in the region analyzed in this paper in the near future.

Source models proposed by Brune (1970) and Boatwright (1978): effects in estimation of corner frequencies

We used the source model of Boatwright (1978) in this study, which has an omega-squared spectrum. Another source model, that of Brune (1970), is also widely used as an omega-squared model; it shows a broader change in spectral amplitude around the corner frequency. Here we overview the difference of the two models and discuss the effects on estimating corner frequencies. The spectral ratio of the model of Brune (1970) is expressed by the following equation,

|u^C_r(f)| = R^C_r · M_0r · [1 + (f / f^C_0E)^2] / [1 + (f / f^C_0A)^2], (14)

which is similar to Eq. (9) for the model of Boatwright (1978). Fig. 8 shows synthetic spectral ratios for two pairs of waveforms in the frequency domain, following Eqs. (9) and (14). Blue and red lines indicate the ratios associated with the models of Brune (1970) and Boatwright (1978), respectively. The functions in Fig. 8a, b have different corner frequencies, as shown by black arrows: (a) 2.0 and 6.0 Hz, and (b) 2.0 and 30.0 Hz for the larger (analyzed) and smaller (EGF) earthquakes. The factor R^C_r M_0r in Eqs. (9) and (14) is set to 10.0. We can clearly see that the slope in proportion to f^-2 between the two corner frequencies f^C_0A and f^C_0E is unclear for the model of Brune (1970). This suggests that the model of Brune (1970) may include larger ambiguity in estimating the corner frequencies and the value of R^C_r M_0r for spectral ratios with a clear proportionality to f^-2 between the two corner frequencies, such as the examples shown in Figs. 2 and 10 in this study. This is one reason why we adopted the model of Boatwright (1978) in this study as an omega-squared source model. We have to note another point associated with estimated values of corner frequencies, or stress drops. Fig. 9 shows two examples of spectral ratios with different corner frequencies. Blue and red lines indicate spectral ratios for the source model of Brune (1970) with corner frequencies of 1.6 and 40.0 Hz, and that of Boatwright (1978) with corner frequencies of 2.0 and 30.0 Hz, respectively. The values of R^C_r M_0r for the models of Brune (1970) and Boatwright (1978) are equal to 10.0 and 7.0, respectively. These two functions give similar spectral ratios for the frequency range from 0.7 to 20.0 Hz, which we used in estimating corner frequencies. This suggests the possibility that corner frequencies of analyzed earthquakes estimated from the model of Boatwright (1978) might be higher than those derived from the model of Brune (1970). This difference in corner frequency can result in a significant discrepancy in stress drop estimates by the two source models, because the value of stress drop is proportional to the third power of the corner frequency. This may be one reason why our results include extremely high values of stress drop for several earthquakes, compared to the results of Cocco et al. (2016). Please note that this does not cause any problem in discussing relative characteristics of stress drop values in space and time.

[Fig. 8 caption: Synthetic spectral ratios for the models of Brune (1970) (blue) and Boatwright (1978) (red) as a function of frequency; the two panels differ in the corner frequencies of the analyzed and EGF earthquakes (black arrows), with R^C_r M_0r = 10.0. The spectral ratios asymptotically approach R^C_r M_0r for f → 0 and R^C_r M_0r · (f^C_0A / f^C_0E)^2 for f → +∞.]
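The contrast between Eqs. (9) and (14) can be checked numerically. The sketch below reproduces the setting of Fig. 8b (corner frequencies of 2.0 and 30.0 Hz, R^C_r M_0r = 10) and fits the local log-log slope between the two corner frequencies; the 4-15 Hz band chosen for the slope fit is an arbitrary illustrative choice, not one taken from the paper.

```python
import numpy as np

def ratio_boatwright(f, rm0, fc_a, fc_e):
    # Eq. (9): omega-squared model of Boatwright (1978)
    return rm0 * np.sqrt((1 + (f / fc_e) ** 4) / (1 + (f / fc_a) ** 4))

def ratio_brune(f, rm0, fc_a, fc_e):
    # Eq. (14): omega-squared model of Brune (1970)
    return rm0 * (1 + (f / fc_e) ** 2) / (1 + (f / fc_a) ** 2)

f = np.logspace(-1, 2, 400)
brune = ratio_brune(f, 10.0, 2.0, 30.0)
boat = ratio_boatwright(f, 10.0, 2.0, 30.0)

# log-log slope between the corner frequencies (4-15 Hz, an illustrative band)
mid = (f > 4.0) & (f < 15.0)
slope = lambda y: np.polyfit(np.log10(f[mid]), np.log10(y[mid]), 1)[0]
print(slope(brune), slope(boat))  # ~-1.7 vs ~-2.0: Boatwright is much closer to f^-2
```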
Rupture models and their effects in estimating stress drops

The model proposed by Brune (1970) is widely used for stress drop analyses. The model assumes that the rupture on a circular fault happens instantly, that is, that the rupture speed is infinite. It is an excellent and simple model, but it neglects a fundamental constraint of continuum mechanics, namely that the rupture speed does not exceed the P-wave velocity. This is why we used the model of Madariaga (1976) in estimating stress drops. Please note that the absolute values of stress drop derived from the model of Madariaga (1976) become 5.6 times larger than the values estimated by the model of Brune (1970). This is why the values of stress drop in the present study are 5-10 times higher than those in other studies on stress drop.

Conclusion

We summarize our conclusions as follows.
1. Areas with a high stress drop are located at the edges of a large coseismic slip with a high gradient in displacement during the 2011 Tohoku earthquake.
2. The areas with a high stress drop showed a temporal change in values of stress drop after the 2011 Tohoku earthquake.
3. Finding (2) may indicate a gradual weakening of the shear strength due to the stress concentration associated with the coseismic slip during the 2011 Tohoku earthquake (Ohnaka and Shen 1999).
4. The temporal change in stress drop likely reflects the change in the shear strength. As the dynamic stress level is unlikely to have changed, stress drops of small earthquakes can be used to monitor the frictional strength on the plate interface.
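As a quick numerical check of the factor of 5.6 quoted in the "Rupture models" subsection above: since the stress drop scales as (f_0 / (k · V_S))^3, the ratio of Madariaga- to Brune-based estimates for S waves is (k_Brune / k_Madariaga)^3. The Brune S-wave constant k = 2.34/(2π) ≈ 0.372 used here is the standard textbook value and is an assumption, as it is not stated in the text.

```python
k_brune = 2.34 / (2 * 3.141592653589793)  # ~0.372, Brune (1970) S-wave constant (assumed)
k_madariaga = 0.21                        # Madariaga (1976), S wave at rupture speed 0.9*Vs
print((k_brune / k_madariaga) ** 3)       # ~5.6
```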
[Fig. 9 caption: Two examples of spectral ratios with different corner frequencies. Blue and red lines indicate spectral ratios for the source model of Brune (1970) with corner frequencies of 1.6 and 40.0 Hz, and that of Boatwright (1978) with corner frequencies of 2.0 and 30.0 Hz, respectively; these corner frequencies are shown by blue and red arrows. Values of R^C_r M_0r for the models of Brune (1970) and Boatwright (1978) are equal to 10.0 and 7.0, respectively. The two functions give similar spectral ratios for the frequency range from 0.7 to 20.0 Hz, which we used in estimating corner frequencies.]

[Fig. 10 caption (beginning truncated in extraction): a P-wave analysis for the same earthquake and station as Fig. 2a. The horizontal color bars show three time windows used in obtaining spectra, which were (P0) -0.50 to 9.73 s, (P1) 0.78 to 11.01 s, and (P2) 2.06 to 12.29 s after the arrival time of the P wave; refer to the caption of Fig. 2 for the gray line. b Waveform spectra for the four time windows shown in (a). c Waveform of the UD component at station N.TROH for the same earthquake as shown in Fig. 2c. d Waveform spectra for the four time windows shown in (c). e Deconvolved spectra with the best-fit omega-squared model; the color lines show deconvolved source spectra, that is, (b) divided by (d), for the three individual time windows with a resampling of frequency bands, and the black broken line indicates the best-fit omega-squared model with corner frequencies of 1.0 and 3.2 Hz for the analyzed and EGF earthquakes, respectively.]

[Fig. 11 caption: a Example of an analyzed waveform of an earthquake with M4.4, recorded by the UD component at station N.IWEH. b Waveform spectra for the four time windows shown in (a). c Example of a waveform of an M3.5 earthquake that was used as an EGF; note that the vertical scale is different from that in (a). d Waveform spectra for the four time windows shown in (c). e Deconvolved spectra with the best-fit omega-squared model; the black broken line indicates corner frequencies of 10.0 and 15.8 Hz for the analyzed and EGF earthquakes, respectively.]

[Fig. 12 caption: a Waveform of the NS component at station N.IWEH for the same earthquake as in Fig. 11. b Waveform spectra for the four time windows shown in (a). c Example of a waveform of an M3.5 earthquake that was used as an EGF; note that the vertical scale is different from that in (a). d Waveform spectra for the four time windows shown in (c). e Deconvolved spectra with the best-fit omega-squared model, with corner frequencies of 6.3 and 12.6 Hz for the analyzed and EGF earthquakes, respectively.]
2020-07-09T09:06:20.886Z
2020-07-02T00:00:00.000
{ "year": 2021, "sha1": "ca330989c04452fff97d2d25ed5fe0ea131c97cc", "oa_license": "CCBY", "oa_url": "https://earth-planets-space.springeropen.com/track/pdf/10.1186/s40623-020-01326-8", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e0f1b5d8d6c6afcbc3664d09afe120c1d5c064d3", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [] }
27428364
pes2o/s2orc
v3-fos-license
Trends in Medicinal Uses of Edible Wild Vertebrates in Brazil

The use of food medicines is a widespread practice worldwide. In Brazil, such use is often associated with wild animals, mostly focusing on vertebrate species. Here we assessed taxonomic and ecological trends in traditional uses of wild edible vertebrates in the country, through an extensive ethnobiological database analysis. Our results showed that at least 165 health conditions are reportedly treated by edible vertebrate species (n = 204), mostly fishes and mammals. However, reptiles stand out, presenting a higher plasticity in the treatment of multiple health conditions. Considering the 20 disease categories recorded, treatment prescriptions were similar within continental (i.e., terrestrial and freshwater) and also within coastal and marine habitats, which may reflect locally related trends in the occurrence and use of the medicinal fauna. The comprehension of the multiplicity and trends in the therapeutic uses of Brazilian vertebrates is of particular interest from a conservation perspective, as several threatened species were recorded.

Introduction

Wildlife represents an immeasurable source of raw materials that support the health systems of different human cultures that depend on nature as a source of medicines to treat and cure illnesses [1]. Plants and animals have been used as medicinal sources since ancient times, and even today animal- and plant-based pharmacopeias continue to play an essential role in health care. Although plants and plant-derived materials make up the majority of the ingredients used in most traditional medical systems globally, whole animals, animal parts, and animal-derived products also constitute important elements of the materia medica [2][3][4][5][6]. The use of animal species as remedies, although representing an important component of traditional medicines (sometimes in association with plant species), has been much less studied than medicinal plants [1]. However, the importance of nonbotanical remedies (those of animal and mineral origin) is emerging [7], resulting in a recent boom in publications focusing on zootherapy [8][9][10][11]. Brazil is well known for its rich social/cultural diversity, as represented by more than two hundred indigenous peoples and a range of local communities, which in turn have contributed to the high diversity of traditional knowledge and practices, including the use of medicinal animals. Indeed, animals have been used as a source of medicine in the country and have played a significant role in healing practices, as many people have used animals as medicines or as alternative or supplementary treatments [12,13]. Hence, Brazil can be considered a model for extensive zootherapeutic studies, since the use of animals and animal-derived products is widespread among the country's human cultures, as predicted by the zootherapeutic universality hypothesis [14]. Furthermore, the concomitant use of wild animals for nutritional and medicinal purposes is also diffuse in several localities in the country, thus highlighting their important role as food medicine in well-established folk medical practices [15]. Vertebrates are among the animal groups most frequently used in traditional medicines worldwide [1]. As remarked by Perry [16], this is an expected trend, considering the frequent interactions between people and vertebrates, typically large-bodied animals that may provide a wide range of medicinal products.
This raises particular conservation concerns, as some of these taxa are overharvested for their medicinal uses and are now threatened [1]. In this article, we provide an assessment of the uses of wild edible vertebrate species in Brazilian Traditional Medicine. The study focused on the following questions: (1) Which edible vertebrate taxa are most used in Brazilian Traditional Medicine? (2) Do the conditions treated by medicinal resources vary with taxonomic group and/or the animal's habitat?

Methods

Data used in this research resulted from an extensive analysis of the ethnozoological database provided by the Laboratório de Etnozoologia, Universidade Estadual da Paraíba. The database comprises information from ethnozoological studies on faunal medicinal use performed in all Brazilian regions. Additional data were gathered from information available in reviews published by the laboratory researchers [17][18][19]. Data analysis comprised information on species of edible vertebrates used as medicines, their family classification, habitats, conservation status, and the conditions for which animals were prescribed. We only considered those taxa that could be identified to species level, and the scientific nomenclature of the taxa recorded (fishes, amphibians, reptiles, and mammals) and/or habitats were in accordance with the following databases: Fishbase (Froese and Pauly, 2016; http://www.fishbase.org/), Amphibian Species of the World (http://research.amnh.org/herpetology/amphibia/index.php), The Reptile Database (http://www.reptile-database.org/), and Mammal Species of the World [20]. With regard to habitat analysis, marine and estuarine species were grouped in the same category (i.e., coastal and marine); if a marine species was also reported in freshwater environments, its habitat was categorized as coastal and marine/freshwater. Moreover, continental species which could inhabit both terrestrial and aquatic systems were considered semiaquatic species. The conservation status of the analysed species follows the International Union for Conservation of Nature [21], the Convention on International Trade in Endangered Species of Wild Fauna and Flora [22], and the Brazilian red lists (decrees 444 and 445, Brazilian Ministry of Environment, 2014). Health conditions considered in this research were categorized by the International Statistical Classification of Diseases and Related Health Problems (ICD-10 Version: 2016; http://apps.who.int/classifications/icd10/browse/2016/en).

Data Analysis. All data were verified for normal distribution (Shapiro-Wilk's test) and homogeneity of variance (Levene's test), and nonparametric tests were performed when those assumptions were not met. A Kruskal-Wallis test (followed by Dunn's post hoc test) and an ANOVA were performed to determine whether the number of health conditions treated per species varied among vertebrate taxonomic groups or habitat types, respectively. Resemblance between the health conditions treated (grouped into ICD categories) and taxonomic groups or animals' habitat types was assessed based on Jaccard's similarity index, and the resulting matrices were used to perform cluster analyses (an illustrative sketch of this procedure is given below). Due to the low number of species recorded (n = 3), amphibians were excluded from all statistical analyses regarding taxonomic groups.
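As an illustration of the resemblance analysis described above, the following Python sketch clusters taxonomic groups by the Jaccard dissimilarity of the disease categories they treat. The presence/absence matrix and the average-linkage choice are invented placeholders; the paper does not specify its linkage method or publish the underlying matrix.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, dendrogram

# Hypothetical presence/absence matrix: rows = taxonomic groups,
# columns = ICD-10 disease categories (1 = at least one species prescribed)
groups = ["fishes", "birds", "reptiles", "mammals"]
X = np.array([
    [1, 1, 0, 1, 1, 0],   # illustrative values only (the study used 20 categories)
    [1, 1, 0, 1, 1, 0],
    [1, 1, 1, 1, 0, 1],
    [1, 1, 1, 1, 1, 1],
])

d = pdist(X, metric="jaccard")        # 1 - Jaccard similarity index
tree = linkage(d, method="average")   # hierarchical clustering on the distance matrix
dendrogram(tree, labels=groups, no_plot=True)  # set no_plot=False to draw the tree
```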
Results

Edible medicinal vertebrates were reportedly used to treat 165 health conditions/diseases (see Table 2). A single illness could be treated by various animal species (e.g., 67 animal species were used in the treatment of asthma and 60 in the treatment of rheumatism), and although most species (particularly fishes, mammals, and birds) were used to treat only one (n = 85; 41.7%) or up to five illnesses (n = 156; 76.5%), several were prescribed for treating multiple illnesses (>5 conditions; n = 48, 23.5%), as shown in Figure 2. Reptiles were the most versatile group, as they were mostly used in the treatment of multiple conditions, with almost half of the species (n = 14) being used to treat more than 10 illnesses (Figure 2). Indeed, of the 10 most expressive species in the treatment of multiple conditions (see Table 1), seven are reptiles, for instance, the "teju" and the boa snake (Salvator teguixin and Boa constrictor, resp.; n = 28 health conditions prescribed each), the Neotropical rattlesnake (Crotalus durissus; n = 27 conditions), the green sea turtle (Chelonia mydas; n = 25 conditions), and the common caiman (Caiman crocodilus; n = 24 conditions).

[Table 1 excerpt (species column lost in extraction): Thrombosis, infection, swelling, asthma, amulet used as a protection against snake bite, injuries caused by spines of the "arraia", pain relief in injuries caused by snake bites.]

Prescriptions of edible medicinal vertebrates were generalised into 20 disease categories, according to ICD-10. Of those, "symptoms, signs, and abnormal clinical and laboratory findings" was the most recorded category in terms of therapeutic quotes, followed by "infectious and parasitic diseases" and "injuries, poisoning, and other consequences of external causes" (Table 2). With regard to the number of species associated with ICD-10 categories, most animals were prescribed for treating problems associated with the "musculoskeletal system and connective tissue" and the "respiratory system" (each: n = 80 species; 39.2%), "injuries, poisoning, and other consequences of external causes" (67 species, 32.8%), and "symptoms, signs, and abnormal clinical and laboratory findings" (58 species, 28.4%) (Table 2). Although most medicinal vertebrates provide raw materials for remedies, medicinal products often have magical-religious purposes, particularly for the prevention of diseases of spiritual cause (e.g., evil eye); animals were also used as amulets to prevent diseases (e.g., an amulet used as a protection against snake bite). It is worth noting that many animals involved in poisoning accidents, such as stingrays and snakes, are also used in folk medicine, particularly to treat injuries caused by themselves (see Table 1).

[Table 2 excerpt (rows flattened during extraction; each row lists the conditions treated within an ICD-10 category):
- (category header lost): Ascites; chest pain; cough; cracks in the sole of the feet; edema (also quoted as edema in the legs); fatigue; fever; headache; hoarseness; inflammation; jaundice; lack of appetite (also quoted as lack of appetite in children); numbness; pain (also quoted as pain in the body; pain in the breast; pain in the legs; to reduce pain); shortness of breath; swelling; assisting children who take longer than usual to start walking; vomit.
- Injury, poisoning, and certain other consequences of external causes (n = 67 species): Bruises; burns (also quoted as burns in the skin); chilblains; injuries caused by a bang; injuries caused by the animal itself; injuries caused by the spines of fishes (also quoted as injuries caused by the spines of rays); intoxication from poisonous animals; pain relief in injuries caused by the species' sting; pain relief in injuries caused by snake bites; pain relief in injuries caused by stings of insects; scratches; assisting in removing spines or other sharp structures from the skin (also quoted as to suck a splinter out of skin or flesh); wounds.
- Diseases of the musculoskeletal system and connective tissue (n = 80 species): Arthritis; arthrosis; backache; bursitis; luxation; muscle strain; muscular pain; neck strain; osteoporosis; pain in joint; rheumatism; sprains; helping to strengthen bones.
- (category header lost): Disorders after parturition (to accelerate recovery after parturition); haemorrhage after delivery; nausea during pregnancy; pain in gestation; helping to accelerate parturition; helping to avoid swelling of the breast during breastfeeding; helping to induce abortion; helping to prevent abortion; wound in the breast caused by suckling.
- (category header lost): Healing of the umbilical cord of a newborn baby.]

Fishes and birds appear to have the most similar uses according to ICD-10 categories (Jaccard index: 94.4), as do reptiles and mammals (Jaccard index: 90.0), resulting in two distinct clusters (Figure 4(a)). When considering the resemblance between the disease categories recorded and animals' habitat types, two distinct clusters were also formed (terrestrial, freshwater, and coastal and marine; coastal and marine/freshwater and semiaquatic) (Figure 4(b)), reflecting the highest similarities between continental habitats (terrestrial and freshwater; Jaccard index: 90.0). With regard to species conservation status, 160 animals figure in at least one of the three red lists assessed (see Table 1). In the IUCN red list, 33 species (mainly fishes and mammals) are classified into threatened categories, mostly as vulnerable (VU; n = 27). Endangered (EN) and critically endangered (CR) species comprised six fishes and reptiles, namely, Narcine bancroftii and Pristis pectinata (CR) and Sphyrna lewini, S. mokarran, Chelonia mydas, and Eretmochelys imbricata (EN). In the Brazilian red lists, most threatened animals are also considered VU (n = 22); EN species (n = 9) comprise mainly fishes and mammals; and CR ones (n = 8) comprise mainly fishes and marine reptiles. In CITES, 58 species are listed, especially in its Appendix II (n = 37), mammals and reptiles being the most expressive groups.

Discussion

The high number of vertebrates used as medicine is not surprising given the important role played by wildlife as a source of medicines in different traditional medicine systems [8,10,23,24]. The predominance of fishes and mammals in Brazilian Traditional Medicine confirms our expectations, given that those groups comprise major harvest targets in Brazil [25][26][27][28]. Although these two taxa have been primarily harvested for alimentary purposes, they generate a series of inedible parts (such as bone, skin, tail, feather, liver, bile ("fel"), rattle (from rattlesnakes), spine, scale, penis, carapace, beak, teeth, head, nails, and horn) that can be used in popular medicines.
According to Moura and Marques [29], the use of leftover/secondary products derived from the fauna seems to be one of the most conspicuous features of Brazilian popular zootherapy. Zootherapeutic products, however, do not include inedible parts solely: flesh, eggs, and viscera are among the animal products used for both medicinal and alimentary purposes [1,12,13,30,31]. This corroborates the assumption that the consumption of wild vertebrate meat is often related to the purported medicinal or cultural benefits derived from the animal parts [32][33][34][35]. In a recent review study, Alves et al. [15] pointed out that at least 354 wild animal species are used in Brazilian Traditional Medicine, of which 157 are also used as food, evidencing that a close connection between eating and healing is common in Brazilian zootherapy. This is in line with several studies in ethnobiology and ethnopharmacology that have observed how difficult a clear separation between medicines and foods can be [36][37][38], and this situation includes plants and animals, essential items for the preparation of traditional medicine. Whether for food or medicinal purposes, the consumption of wild animals can lead to the transmission of various human diseases [39]. Van Vliet et al. [40] highlighted that the consumption of bushmeat for either purpose may lead to human infection by several zoonotic pathogens. Armadillos, for example, are widely used in folk medicine and are a natural reservoir of the etiological agents of several zoonotic diseases that affect humans, such as leprosy, trichinosis, coccidioidomycosis (Valley Fever), Chagas disease, and typhus [41]. Therefore, it is essential that traditional drug therapies be submitted to an appropriate benefit/risk analysis [39]. It was found that several medicinal vertebrates used in Brazilian Traditional Medicine have multiple therapeutic indications. The possibility of using various remedies for the same ailment is popular because it allows users to adapt to the availability of the animals. The fact that some medicinal animals are used for the same purpose suggests that different species can share similar medicinal properties and might indicate the pharmacological effectiveness of those zootherapeutic remedies [8]. Multiple medicinal uses become even more evident when considering reptiles, as this group comprises one of the most important animal resources in the history of medicine [42] and is widely used in the most important traditional pharmacopeias worldwide [35]. Indeed, use in traditional medicines is the human practice that involves the highest diversity of reptile species in Brazil [17], some of which play important roles in traditional medicines, such as the "teju" (Tupinambis teguixin) and the boa snake (Boa constrictor), which are among the most used medicinal animals in Brazil [42,43]. Curiously, there is a general aversion to consuming some reptile groups, such as snakes and lizards, in the country. Nonetheless, this fact does not impair the use of these animals as medicines, as their use is mainly associated with popular beliefs known as "simpatias," which, in most cases, state that "a person receiving a given treatment cannot know what he/she is taking, otherwise the effect ceases" [18]. Hence, this seems to favour the high use of reptile species, despite the widespread aversion to those animals.
On the other hand, despite presenting the highest diversity of medicinal species, fishes were recommended to treat a comparatively low number of health conditions. This may be related to the fact that most parts of a fish are consumed as food; thus fewer products are left to be used in medicinal practices. Similarly, when considering the major hunted taxa in Brazil, that is, mammals and birds [25,26,44], most species are also mostly consumed as food. However, the inedible parts generate "leftovers" (e.g., skin, tail, spine, scale, teeth, nails, and horn), which are among the main products used in traditional medicine. Indeed, according to Moura and Marques [29], the zootherapeutic use of the fauna is mainly based on derived leftovers/secondary products. Those authors also emphasise that, from the point of view of ecological theory, the use of leftovers could be justified as an attempt to make the most of resources obtained from ecosystems that are inappropriate for alimentary consumption due to the mechanical difficulty of ingesting parts such as horns, feathers, and scales. Therefore, one can expect that the diversity of leftovers provided by a species may support the potential to treat multiple diseases. Animals from continental habitats (i.e., terrestrial and freshwater) were found to treat similar disease categories; the same was found for coastal and marine animals. This may be related to the local distribution of the diseases treated, thus leading people to use local resources in the traditional medicine of each region. For instance, in coastal areas, the occurrence of diseases classified into the category "external causes of morbidity and mortality" is very common, due to sting/poisoning accidents caused by fishes (e.g., stingrays, catfish, and toadfish), which are often treated with zootherapeutic products derived from the animals that caused the lesions [45][46][47][48]. Natural resources play an essential role in health care in traditional medical systems, as well as in bioprospecting for new drugs [49,50], and interest in animal-based products has risen [49,51,52]. Hence, despite the available information on the chemical components and actions of some of these products, studies on the traditional uses of fauna remain potentially very important to shed light on several aspects of their therapeutic applications [53]. The comprehension of the multiplicity and trends in the therapeutic uses of several vertebrate species is of particular interest from a conservation perspective, as threatened animals, such as those recorded in this and other studies [30], could be replaced by nonthreatened species with similar properties. However, it is important to highlight that the use of animals for both food and medicinal purposes may impose higher pressure on species under overexploitation conditions. For instance, if an animal is sought solely for medicinal purposes, the hunter/fisher may use selective capture techniques or even release nontargeted species. On the other hand, if an animal is captured for feeding reasons and is not the main target of the hunting or fishery (e.g., due to size), it can be kept by the hunter/fisher due to some medicinal property.
Hence, understanding such complex interactions and trends in the use of fauna for nutritional and medicinal purposes highlights the important role that ethnobiological and ethnopharmacological studies may play in the crucial discussion of the trade-offs between animal harvesting and its sustainability, towards better regulation of those practices.
Conclusion
Wild edible vertebrates, particularly those inhabiting aquatic environments, are used to treat a wide range of health conditions in Brazil, with reptiles constituting the most versatile group in multiple-disease prescriptions. Moreover, a trend in prescriptions was found according to the animals' habitats, as disease categories were similar within continental and within coastal and marine habitats. Several consumed species are under threat, raising conservation concerns, particularly because of the dual function (as food and medicine) those species serve.
2018-04-03T04:08:11.427Z
2017-08-15T00:00:00.000
{ "year": 2017, "sha1": "d2be4e33fafb0f251585340f08a2fbd3b038c2c4", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/ecam/2017/4901329.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5bdf7342bf9126f82ab43e9c82db9fc3978d7204", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
31356643
pes2o/s2orc
v3-fos-license
Intelligence and Medial Temporal Lobe Function in Older Adults: A Functional MR Imaging–Based Investigation
BACKGROUND AND PURPOSE: The influence of general intelligence and formal education on functional MR imaging (fMRI) activation has not been thoroughly studied in older adults. Although these factors could be controlled for through study design, this approach makes sample selection more difficult and reduces power. This study was undertaken to examine our hypothesis that intelligence and education would impact medial temporal lobe (MTL) fMRI responses to an episodic memory task in healthy elderly subjects. MATERIALS AND METHODS: Thirty-six women and 38 men, 50–83 years of age (mean, 63.4 ± 7.9 years), completed an auditory paired-associates paradigm in a 1.5T magnet. The amplitude and volume of fMRI activation for both the right and left MTLs and MTL subregions were correlated with the intelligence quotients (IQs) and educational levels by using Pearson correlation coefficient tests and regression analyses. RESULTS: The participants' mean estimated full-scale IQ and verbal IQ scores were 110.4 ± 7.6 (range, 92–123) and 108.9 ± 8.7 (range, 88–123), respectively. The years of education showed a mean of 16.1 ± 3.2 years (range, 8–25 years). The paradigm produced significant activation in the MTL and subregions. However, the volume and amplitude of activation were unrelated to either IQ or years of schooling in men and/or women. CONCLUSIONS: We found no evidence of an effect of IQ or education on either the volume or amplitude of fMRI activation, suggesting that these factors do not necessarily need to be incorporated into study design or considered when evaluating other group relationships with fMRI.
Most functional brain studies examining the effects of general intelligence demonstrate an inverse correlation between intelligence (Spearman g) and activation in frontal lobe circuitry, which is thought to reflect executive control of attention, working memory, and response selection. [1][2][3][4] A similar correlation with nonfrontal brain areas, including medial temporal lobe (MTL) structures, has also been reported. 5 Structural imaging studies have generally reported modest correlations between intelligence and total brain volume but have been less successful at correlating intelligence measures with regional brain volumes (eg, Flashman et al 6 ). There is some evidence that both frontal and nonfrontal areas show such a correlation. For example, Haier et al 7 used optimized voxel-based morphometry to investigate structural correlates of intelligence on a voxel-by-voxel basis throughout the entire brain and found that intelligence quotient (IQ) positively correlated with gray matter volume in frontal, temporal, parietal, and occipital areas, suggesting that there is a nonspecific or distributed neural basis of intelligence that extends beyond frontal circuitry. Even though the MTL is not thought of as a locus of intelligence per se, this brain region is integral to declarative memory function, which requires the ability to learn and remember, 8 capacities fundamental to the manner in which intelligence is generally assessed. Studies of intelligence and memory performance uniformly report a direct relationship between the two.
[9][10][11] In addition, years of formal education, which is often used as a proxy for intelligence, correlates with performance on cognitive screening instruments and test batteries, [12][13][14][15] though it appears that years of education may be less related to memory performance than general intellectual functioning. 10 Furthermore, the MTL is a common location of the brain investigated in patients with neurodegenerative disorders including Alzheimer disease. One recent functional MR imaging (fMRI) memory study reported an inverse relationship between years of education and temporal lobe activation among the elderly, 16 providing support for the cognitive reserve hypothesis that attempts to explain the seemingly protective effect of higher education against neurodegenerative disorders such as Alzheimer disease. 17 It is important to understand whether functional changes observed in response to memory activation paradigms are a reflection of individual characteristics, in this case the intellect or education of the participant. As is already appreciated, there are a number of factors that influence the signal intensity observed during the performance of fMRI paradigms designed to explore cognitive function. Some of these factors are task-specific and integral to the experimental design, reflecting both the intensity and complexity of the activation paradigm. However, others, such as age, sex, and handedness, are subject-specific and, therefore, are potential sources of bias, which can affect both the validity and interpretation of the blood oxygen level-dependent (BOLD) signal-intensity change. Studies have demonstrated, for example, the influence of age and sex on fMRI activation patterns during a variety of cognitive activation tasks [18][19][20][21][22][23] and have noted structural neuroanatomic differences as well. 17,24,25 These subject-specific factors are generally controlled by matching samples or statistically adjusting for differences in subsequent analyses, practices that can pose problems for sample recruitment and can reduce statistical power if done unnecessarily. This analysis was, therefore, undertaken to examine directly the impact of general cognitive function as measured by IQ and educational achievement on MTL activation patterns in response to an episodic memory task in a sample of healthy controls older than 50 years of age. We sought to determine whether IQs and levels of education are correlated with activation in MTL structures during memory encoding and retrieval. According to the neural efficiency hypothesis, in which less activation signifies a more efficient brain operation, we hypothesized that as IQ and number of years of schooling increase, MTL activation would decrease; 3 that is, individuals with greater cognitive ability would recruit fewer neural resources to complete the task successfully than those for whom the task may be more of a cognitive strain. This study was undertaken to examine our hypothesis that intelligence and education would impact MTL fMRI responses to an episodic memory task in healthy older adults.
Materials and Methods
Participants. The 74 right-handed individuals included in this analysis currently serve as healthy controls for an ongoing study of brain function and cognition. 26 They range from 50 to 83 years of age (mean, 63.4 ± 7.9 years) and include 36 women and 38 men. As part of an extensive evaluation, the years of education and the performance on the North American Adult Reading Test (NAART) were recorded.
The NAART scores were converted to both verbal and full-scale IQs. The IQs, years of education, and ages of the subjects were used as variables for subsequent statistical analysis. The study was approved by the institutional review board, and all participants provided informed consent.
MR Imaging Evaluations. Each individual underwent an MR imaging evaluation, which included coronal magnetization-prepared rapid acquisition of gradient echo (MPRAGE) T1-weighted scanning for total brain and temporal lobe volumetry, a screening T2-weighted scanning to assess any mass lesions or extensive brain injury (TR, 4000 ms; TE, 120 ms; 5 mm with 1-mm intersection gap; 23-cm FOV; and 256 × 512 matrix), and a T1-weighted scanning in the coronal plane corresponding to the sections for which the subsequent fMRI paradigm was performed (TR, 600 ms; TE, 7 ms; 23-cm FOV; and 256 × 256 matrix with a 4.5-mm thickness and 0.5-mm intersection gap). Incidental findings that were noted for the subjects on the screening portion of the MR imaging study included minimal (n = 25) and moderate (n = 5) small vessel ischemic changes (graded by Cardiovascular Health Study criteria), lacunar infarctions of the basal ganglia (n = 7), sinusitis (n = 15), and mastoiditis (n = 2). One patient each had a noncompressive subdural hygroma, posterior fossa arachnoid cyst, and asymptomatic focal occipital lobe infarction.
fMRI Paradigm. Participants were presented with an auditory word-pair-associates learning task, which consisted of two 6-minute 10-second sessions, each with 6 trials. Each trial included an encoding phase, in which 7 unrelated word-pairs (eg, "food" and "book") were presented through MR imaging–compatible headphones, and a cued recall phase, in which the first word from the pair was presented and the participant was instructed to recall silently the second word of the pair. Both encoding and recall were preceded by rest (baseline) periods (see Fig 1 for paradigm diagram). At the end of each session, participants were asked to recall the word pairs. 26
fMRI Scanning, Data Processing, and Analysis. Functional scans were acquired on a 1.5T Intera NT scanner (Philips Medical Systems, Best, the Netherlands) at the F.M. Kirby Functional Imaging Research Center (Kennedy Krieger Institute, Baltimore, Md). The system is equipped with galaxy gradients (66 mT/m at 110 mT/m/s). A standard head coil was used to limit head motion. A sagittal localizer scan was obtained to pinpoint the exact location of the brain. Two functional scans were acquired using echo-planar imaging (EPI) and a BOLD technique with TR = 1000 ms, TE = 39 ms, flip angle (FA) = 90°, FOV = 230 mm in the xy plane, and matrix size = 64 × 64. Eighteen coronal sections were acquired with a 4.5-mm thickness and an intersection gap of 0.5 mm, oriented perpendicular to the anteroposterior commissure line. Sections were acquired sequentially along the z-axis, yielding a total coverage of 90 mm centered on the temporal lobe. Functional scanning was performed in 2 sessions, each with 370 time points. Total functional acquisition time was 12 minutes 20 seconds. A high-resolution whole-brain scan was obtained by using a T1-weighted 3D MPRAGE sequence with the following parameters: TR = 8.6 ms, TE = 3.9 ms, FOV = 240 mm, FA = 80°, matrix size = 256 × 256, section thickness = 1.5 mm, 124 sections.
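For illustration, a block design like this one is typically expressed as condition regressors convolved with a hemodynamic response function before model fitting. The following is a minimal Python sketch assuming a simple double-gamma HRF and made-up block onsets and durations; it is not the study's actual timing schedule or SPM's exact canonical HRF.

```python
import numpy as np
from scipy.stats import gamma

TR = 1.0  # seconds; matches the TR of 1000 ms used for the EPI scans

def canonical_hrf(tr: float, duration: float = 32.0) -> np.ndarray:
    """Double-gamma HRF sampled at the scan TR (an SPM-like shape)."""
    t = np.arange(0.0, duration, tr)
    peak = gamma.pdf(t, 6)           # positive response peaking around 5-6 s
    undershoot = gamma.pdf(t, 16)    # late undershoot
    hrf = peak - undershoot / 6.0
    return hrf / hrf.sum()

def block_regressor(n_scans: int, onsets: list[float], block_len: float,
                    tr: float) -> np.ndarray:
    """Boxcar for one condition convolved with the HRF, truncated to the run."""
    boxcar = np.zeros(n_scans)
    for onset in onsets:
        boxcar[int(onset / tr):int((onset + block_len) / tr)] = 1.0
    return np.convolve(boxcar, canonical_hrf(tr))[:n_scans]

# One session of 370 time points; the onsets and durations below are
# illustrative placeholders, not the paradigm's real schedule.
n_scans = 370
encode = block_regressor(n_scans, onsets=[20, 140, 260], block_len=30, tr=TR)
recall = block_regressor(n_scans, onsets=[80, 200, 320], block_len=30, tr=TR)
design = np.column_stack([encode, recall, np.ones(n_scans)])  # plus intercept
```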
Functional data preprocessing was conducted on Windows XP workstations, by using Statistical Parametric Mapping (SPM99; Wellcome Department of Imaging Neuroscience, University College, London, UK) running under the Matlab 6.1 (MathWorks, Sherborn, Mass) programming and runtime environment. Rigid-body registration (motion correction) was performed by realigning all the scans from both sessions to the mean image of all the functionals in both sessions. This was conducted by using a 6-parameter affine transformation (3 translations and 3 rotations in the x-, y-, and z-axes), followed by reslicing using a "windowed" sinc interpolation. Twelve-parameter affine transformation and nonlinear normalization using 7 × 8 × 7 basis functions were used to warp each individual's data into standard stereotaxic space (standard atlas). Template space was defined by the EPI template of the Montreal Neurologic Institute (MNI; McGill University, Montreal, Ontario, Canada) included with SPM. The template was manually cut to fit each individual scan to improve the quality of normalization. Normalized scans were resliced to isotropic voxels (2 mm³), using trilinear interpolation, and spatially smoothed with a full width at half maximum gaussian kernel of 5 mm³. Individual time series analysis was conducted using the general linear model within the framework of SPM99. Data were modeled as epochs (blocks) and convolved with the canonical hemodynamic response function of SPM to account for the lag between stimulation and the BOLD signal intensity. The model was estimated by using the implementation of ordinary least squares of SPM. The contrasts of interest subtracted activation during the "rest" condition from the "encoding" and "recall" conditions. Hand-drawn segmentations of the right and left medial temporal subregions (hippocampus, parahippocampal cortex, entorhinal cortex, and amygdala) by an expert rater on the single-subject T1-weighted MNI template were used as region-of-interest masks in this analysis. All of these subregions of the MTL were hand-drawn by an experienced neuroanatomist who has been encircling these subregions for over 10 years. For data on the reliability and validity of the manual segmentation method, see Honeycutt et al. 27 We calculated the amplitude and volume of activation for both the right and left MTLs and subregions for each subject's statistical image. These region-specific summaries were correlated with verbal and full-scale IQs and education values by using Pearson correlation coefficient tests. We also performed multiple regression analyses, adjusting for the age and sex of the subjects.
Results
Group maps of activation for memory encoding and recall minus baseline demonstrated MTL activation for the 74 subjects (Fig 2). Areas of significant activation included the left parahippocampal gyrus and the hippocampus bilaterally (P < .05, corrected for total MTL volume). Spearman correlations between the left and right medial temporal lobes, subject-level regional summaries, full-scale IQ, verbal IQ, and education level, respectively, were calculated and are presented in Table 1. Results are given for both the encode and recall portions of the paradigm task compared with baseline. Volume of activation represented the extent of voxels within the region of interest with a t-contrast value above a 3.1 statistical threshold (the volume that surpassed the statistical threshold of P < .05 corrected).
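A minimal sketch of the region-of-interest summaries and IQ correlations described above might look as follows. The array-based inputs, function names, and the statsmodels-based adjustment for age and sex are assumptions for illustration, not the authors' actual SPM/Matlab pipeline.

```python
import numpy as np
from scipy.stats import pearsonr
import statsmodels.api as sm

T_THRESHOLD = 3.1  # voxelwise t cutoff used to define "activated" voxels

def roi_summaries(t_map: np.ndarray, contrast_map: np.ndarray,
                  roi_mask: np.ndarray) -> dict:
    """Per-subject activation summaries within one hand-drawn MTL subregion."""
    roi_t = t_map[roi_mask]
    roi_c = contrast_map[roi_mask]
    return {
        "volume": int((roi_t > T_THRESHOLD).sum()),        # suprathreshold voxels
        "mean": float(roi_c.mean()),                       # mean contrast estimate
        "upper_quartile": float(np.percentile(roi_c, 75)),
    }

def iq_association(activation: np.ndarray, iq: np.ndarray,
                   age: np.ndarray, sex: np.ndarray):
    """Simple correlation plus a regression adjusting for age and sex."""
    r, p = pearsonr(activation, iq)
    X = sm.add_constant(np.column_stack([iq, age, sex]))
    fit = sm.OLS(activation, X).fit()
    return r, p, fit.params[1], fit.pvalues[1]  # adjusted IQ effect and P value
```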
The "mean" was the mean of the subject-specific contrast estimates within the region of interest, whereas the upper quartile was the 75th percentile of the subject-specific contrast estimates. No significant correlations were found between any of our MR imaging measures (volumes and amplitudes of activity) and full-scale or verbal IQ or educational levels in any of the MTL regions and certainly none when accounting for multiplicity. These same results (absence of correlation) were replicated for the parahippocampal gyrus, entorhinal cortex, hippocampus, and amygdala when these subregions were separately analyzed. When the amplitudes of activation were considered as well, there also were no correlations seen. The same analysis was repeated after stratifying by subject sex (Tables 2 and 3). Once again for the overall MTLs and for the subregions of the right and left temporal lobes, there were no statistically significant correlations found between volumes and amplitudes of activation and IQ or education for men or women. Adjusting for age did not influence these findings either. Further analyses stratifying education into a dichotomy of college-educated or non-college-educated subjects and directly comparing genders also showed no relationship to the degree of fMRI activation in any region. Lack of a significant correlation was observed during both encoding and recall components of the task and on a combined contrast adding both components. Discussion This large fMRI study demonstrates that the intelligence and education levels of these older healthy research study participants do not influence the volume or amplitude of MTL activation seen in an auditory episodic memory experiment. The paradigm produced significant increases in activation in the MTL, including the left parahippocampal gyrus and bilateral hippocampus. However, these increases were unrelated to either IQ or years of schooling. This lack of association is apparent for both periods of memory encoding and periods of memory recall. Studies have reported associations between levels of education and intelligence and brain atrophy, as well as performance on neuropsychological testing, 17,28,29 which may be expected to reflect underlying neural functioning. However, this relationship is not as apparent in the context of functional hemodynamic blood flow, with a number of studies reporting functional alteration without structural change, possibly indicating that functional changes precede neuronal loss. One such study 26 found functional differences between individuals at familial risk for Alzheimer disease and matched controls in the absence of any memory performance differences or structural differences in MTL regions. There is evidence for functional differences related to IQ and education in regions other than the MTL on some imaging studies. Scarmeas et al 30 found, for example, that a composite factor incorporating both education and IQ was associated with greater positron-emission tomography (PET) activity in the left cuneus for older subjects and in the right inferior temporal gyrus, right postcentral gyrus, and cingulate in younger subjects during a visual memory task. Springer et al 16 recently noted a contrast between the young and the old with respect to the relationship of activation maps and level of education. Using an fMRI memory task, they found that among elderly participants, higher education was associated with increased frontal activity, whereas lower education was associated with increased MTL activity. 
The authors suggest that education may play a role in an underlying ability to shift neural resources. Finally, a recent PET study investigating the impact of cognitive reserve in healthy elderly subjects, as quantified by a combination of education and IQ, found that subjects with higher cognitive reserves demonstrated a dynamic shift in frontal and medial temporal networks in response to a spatial recognition memory task as task difficulty increased. 31 This suggests that intelligence and education may have an indirect effect on MTL function, in the context of increasing task demand. Although this study of relatively intelligent and well-educated older individuals provided robust results, it is possible that these findings will not generalize to younger or less well-educated samples. On average, the participants had completed college and registered an IQ (mean, 108.9–110.4) at the higher end of the normal range of intelligence, and all were at least middle-aged. In addition, the results generated by this specific paired-associates memory task may not generalize to findings from other fMRI memory paradigms. On the other hand, the strength of this study lies in the large sample size tested. Although the average intelligence and educational level in this sample may be high, the data may be apropos to the subject population typically enrolled in most fMRI research labs recruiting from their local university populations. This study does not support the cognitive reserve hypothesis, which would have predicted an inverse correlation of IQ and education with functional activation such that increased intelligence and years of schooling would result in lower levels of BOLD activation in the MTL in response to this task. However, it is possible that the paradigm used here was not sufficiently challenging to elicit differential performance, though analysis of the free-recall data following scanning produced a memory retention rate of only about 83.7%, suggesting that the subjects were being "challenged" by the paradigm. There is evidence from functional studies of motor output, attention, and spatial-recognition memory that increased task difficulty results in altered neural activation. [31][32][33] There are, however, inherent problems in modulating task difficulty. For example, in initial pilot work for this task, increasing the unfamiliarity and length of the words used for the word pairs or increasing the speed of delivery resulted in less activation because individuals reported they were frustrated because they could not accomplish the task and essentially stopped trying.
Conclusions
Our results suggest that the level of education and verbal and full-scale IQs have little influence on encoding and recall MTL activation in response to an auditory episodic memory task in healthy older adults. This conclusion impacts the potential need to control for these demographic and neuropsychological variables in the analysis of the described memory-based fMRI task. This finding is somewhat at odds with conventional wisdom that considers education and IQ as primary confounders between cognitive or health predictors and fMRI activation. Our results suggest that this conventional wisdom should be tempered in some settings, especially because unnecessary control for potential confounders can inflate variances and lead to type II errors. Whether these findings can be generalized to a wider population and to other stimulus paradigms needs to be examined.
For example, other study populations may have a wider range of education levels and intelligence quotients. Further study needs to be undertaken because these questions have important ramifications for subject inclusion criteria, statistical power calculations, and analysis of fMRI research studies.
2017-07-06T14:25:23.115Z
2009-09-01T00:00:00.000
{ "year": 2009, "sha1": "cb7f33573b576429a29d63aadc9e4f93f8417c30", "oa_license": "CCBY", "oa_url": "http://www.ajnr.org/content/30/8/1477.full.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "cb7f33573b576429a29d63aadc9e4f93f8417c30", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
238411178
pes2o/s2orc
v3-fos-license
Characterization of the Genomic and Immunologic Diversity of Malignant Brain Tumors through Multisector Analysis
Multisector analysis of malignant brain tumors highlights substantial differences in immunogenomic landscapes, with gliomas harboring greater spatial heterogeneity at a genomic, neoantigen, and T-cell repertoire level.
INTRODUCTION
Malignant brain tumors consist of both primary tumors arising from within the central nervous system (CNS) and secondary metastases originating from extracranial sites. The most common malignant primary tumor of the CNS is glioblastoma (GBM), whereas secondary tumors typically develop from carcinomas of the lung, breast, or kidney or from melanoma (1,2). Although historically both primary and metastatic malignancies carried poor prognoses, the use of checkpoint blockade immunotherapy has led to improved outcomes and robust intratumoral T-cell infiltration in a considerable subset of patients with brain metastases (BrMET; refs. 3,4). However, these treatments and other immunotherapeutic approaches have not been effective in GBM. Indeed, anti-PD-1 monotherapy did not improve survival in patients with newly diagnosed or recurrent disease, and targeted vaccines and cell therapy approaches also have shown limited efficacy (5)(6)(7)(8)(9). Thus far, immunotherapy responses within GBM have been restricted to a selective group of patients with a hypermutated phenotype caused by germline deficiencies in DNA replication or repair (10,11). One cardinal feature of GBM that may be a particularly important contributor to therapy resistance is its extensive intratumoral molecular and cellular heterogeneity (12). Indeed, intratumoral genetic heterogeneity is found in many cancer types to varying degrees (13)(14)(15)(16). The presence of a complex tumor subclonal genomic architecture likely plays a pivotal role in limiting the efficacy of both targeted therapies and immunotherapies. Specifically, studies in non-small cell lung cancer (NSCLC) and melanoma showed a strong association between checkpoint blockade immunotherapy response and the frequency of clonal nonsynonymous mutations, which likely serve as sources of spatially distributed neoantigen targets (17,18). In GBM, extensive work has demonstrated that intratumoral heterogeneity of a range of tumor somatic changes, including mutations, copy-number alterations (CNA), and transcriptional signatures across spatially distinct tumor regions, is also a hallmark of this disease (19)(20)(21). In contrast, we have a more limited understanding of the extent of intratumoral heterogeneity in other intracranial malignancies such as BrMETs, as few corresponding analyses have been performed in these cancers (22). Moreover, further work is needed to understand the relationship between tumor genetic heterogeneity and other important features of the tumor ecosystem, including the immune microenvironment. Although numerous recent studies in other solid tumors, including NSCLC and ovarian cancer, have examined the relationship between tumor cell-intrinsic properties and immunologic parameters such as the T-cell receptor (TCR) repertoire (23)(24)(25)(26), this analysis has not been extended to either primary or metastatic malignant brain tumors, in which the extent of heterogeneity in the immune microenvironment and its interplay with genomic and transcriptional diversity is unknown.
In the setting of GBM, it is also unclear how the tumor genomic and immunologic landscapes evolve in the hypermutated state seen in close to 20% of recurrent disease (27). To address these questions, we performed systematic and comprehensive multisector immunogenomic analyses on 93 samples from a cohort of 30 patients with primary or recurrent gliomas or metastatic brain tumors, representing the largest cohort of these brain cancers studied spatially to date. For each patient, we characterized multiple spatially distinct regions using whole-exome sequencing (WES), custom capture validation, RNA sequencing, and TCR sequencing. Our findings underscore the significant differences in clonal architecture between gliomas and metastatic brain tumors, which translate into distinct neoantigen landscapes and, in turn, tumor-infiltrating T-cell clonotypic diversity. These data therefore provide high-resolution insights into the immunogenomic landscapes within malignant brain tumors, which may inform tumor-specific therapeutic approaches.
Genomic Features of Glioma and BrMET Cohorts
We obtained surgically resected tumor tissue and matched peripheral blood from a group of 30 patients with pathologically confirmed intracranial tumors (Supplementary Fig. S1A and S1B). Within this cohort, 14 tumors were primary GBM, 4 were recurrent GBM, 1 was an anaplastic oligodendroglioma, and 11 were BrMETs from lung, breast, or cutaneous malignancies. In total, 21 of these patients (15 primary gliomas and 6 BrMETs) were newly diagnosed and had received no prior therapy. The 4 patients with recurrent GBM had undergone standard-of-care chemoradiation therapy, whereas 5 of the patients with BrMET had received varying treatment regimens for their primary tumor (Supplementary Fig. S1B). Importantly, no patients had prior immunotherapy treatment, but all patients across the cohort were given preoperative steroids. Immediately following surgical resection via craniotomy, each sample was dissociated into multiple (2–4) spatially distinct tumor regions that each underwent comprehensive genomic and immunologic profiling including DNA WES, RNA sequencing, neoantigen prediction, and TCR sequencing (Fig. 1A). Using WES at an average coverage depth of 156×, we identified a total of 11,923 somatic variants (both single-nucleotide variants and indels) across the tumor cohort. Because of the subclonal nature of many variant calls derived from tumors with significant intratumoral heterogeneity, we developed a customized, targeted validation sequencing assay with a set of probes targeting all initially identified variants in addition to a select group of noncoding sites (e.g., TERT promoter mutation sites) to resequence all initially detected variants to confirm their presence. Using this approach, 87.6% of the cohort was characterized at a depth of at least 250× at >90% of positions captured by the custom reagent. This resulted in confirmation of 92% (10,254/11,181) of the original variants through validation sequencing after removal of those that could not be targeted (488 variants) or lacked sufficient minimum coverage (244 variants; Supplementary Table S1). These data allowed us to obtain high-precision estimates of the variant allele frequency (VAF) of each variant and provided greater confidence that variants were not missed owing to potential regional variability in neoplastic content or sequencing depth.
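As a rough illustration of the validation logic described above, the sketch below applies the 250× minimum-depth rule to re-sequenced variants and computes VAFs from read counts. The ValidationCall structure and the minimum alternate-read cutoff are invented for this example; the study's actual confirmation criteria are not restated here.

```python
from dataclasses import dataclass

MIN_DEPTH = 250  # minimum validation coverage cited in the text

@dataclass
class ValidationCall:
    """Read counts for one variant at one site in the custom-capture data."""
    ref_count: int
    alt_count: int

    @property
    def depth(self) -> int:
        return self.ref_count + self.alt_count

    @property
    def vaf(self) -> float:
        return self.alt_count / self.depth if self.depth else 0.0

def confirm_variant(call: ValidationCall, min_alt_reads: int = 3) -> str:
    """Triage a re-sequenced variant; the min_alt_reads cutoff is illustrative."""
    if call.depth < MIN_DEPTH:
        return "insufficient_coverage"
    return "confirmed" if call.alt_count >= min_alt_reads else "not_confirmed"

# Example: 5 variant-supporting reads at 300x total depth counts as confirmed.
print(confirm_variant(ValidationCall(ref_count=295, alt_count=5)))  # confirmed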
As expected, the median number of aggregate somatic variants per tumor was higher in BrMETs (504 variants) than in either primary (93 variants) or recurrent glioma (141 variants; Fig. 1B). Within the glioma cohort, GBM065.Re represented an outlier, with 5,750 somatic mutations identified across four distinct tumor regions. Most of the variants within this sample displayed the characteristic mutational signature associated with prior temozolomide treatment (Supplementary Fig. S2A and S2B), suggesting a treatment-induced hypermutated phenotype (28,29). Within the BrMET specimens, recurrent mutations were identified in TP53 across all histologies (7/11 overall; 3/5 NSCLC, 2/4 breast carcinoma; Fig. 1C). KRAS alterations were identified within two of five NSCLC tumors, and PIK3CA mutations were present in two of four breast carcinomas, consistent with their high incidence in studies of primary samples (30,31). In addition, TERT promoter mutations were observed in both a melanoma and an NSCLC BrMET tumor. Across the GBM samples, we identified recurrent mutations in canonical GBM-associated genes such as the TERT promoter (14/18), TP53 (7/18), PTEN (4/18), NF1 (3/18), and EGFR (3/18; Fig. 1D). This high frequency of TERT promoter mutations mirrors earlier work estimating frequencies upward of 70% to 80% among GBM (32,33). In addition, the frequency of TP53 alterations is consistent with prior studies observing mutations in 30% to 40% of GBM samples, whereas the frequency of EGFR mutations is slightly lower (21,33). We next assessed the prevalence of CNAs within our cohort and identified classical GBM-associated changes such as frequent chromosome 7 amplification encompassing EGFR (17/18), chromosome 9 deletions of the CDKN2A locus (15/18), and chromosome 10 deletions of the PTEN locus (17/18; Fig. 1E; Supplementary Table S2). These amplifications and deletions have been detected at comparably high frequencies across several other studies (21,33). The metastatic tumors displayed alterations similar to those previously defined for their corresponding primary tumor, such as amplifications of KRAS (3/5) in NSCLC and ERBB2 (2/4) in breast carcinomas (Fig. 1F and G).
[Fig. 1E–G: GISTIC G-score plots by chromosome for the glioma, NSCLC brain metastasis, and breast cancer brain metastasis cohorts; dashed lines indicate significantly recurrent amplifications (red) and deletions (blue) at an FDR <0.1. SCLC, small cell lung cancer.]
[Fig. 2: B, clonal, subclonal shared, and subclonal private variants in brain metastases and gliomas; C, proportion of total identified variants that would have been captured through sequencing of a random single site from within each tumor; D, total variants identified per tumor if one, two, or three samples were pooled for analysis; E, CNV clonality per tumor in the glioma (left), NSCLC brain metastasis (middle), and breast cancer brain metastasis (right) cohorts. Significance determined by unpaired t test.]
The BrMETs all harbored a high fraction of clonal variants.
In contrast, gliomas (primary and recurrent samples pooled) contained a higher fraction of both subclonal shared (P < 0.05; unpaired t test) and subclonal private (P < 0.05; unpaired t test) variants than did the BrMETs (median subclonal private variant fraction 0.28 vs. 0.09; Fig. 2B). Overall, approximately 43% of mutations within the glioma cohort were categorized as clonal. Although this is slightly lower than two previous studies have reported, almost all of those tumors had only two spatially separate regions analyzed, in contrast to the three or four sectors sequenced in our study (21,34). Focusing specifically on cancer driver genes, mutations in TP53, PIK3CA, and KRAS were clonal in all cases among BrMET samples. In contrast to the generalized heterogeneity of gliomas, most TERT promoter (13/14 clonal), TP53 (6/7), and EGFR (3/3) mutations were clonally distributed. However, most PTEN (1/4 clonal) and NF1 (0/3) alterations were not clonal within gliomas. These findings are largely consistent with a prior multisector sequencing study that noted significant clonality of TERT promoter and TP53 mutations (21). However, this same group did note that most identified EGFR mutations were subclonal private, in contrast to our cohort (21). To explore the translational implications of this heterogeneity, we calculated the fraction of total tumor variants that would have been identified from sequencing a single glioma or BrMET tumor site, which simulates the information that would be obtained from a single-site biopsy at surgery. We observed that a higher fraction (P < 0.01; unpaired t test) of the total variants was identified within BrMETs (median = 0.92) compared with primary gliomas (median = 0.73) when sampling a single site (Fig. 2C). Given the limitations of single-site sampling to capture tumor-wide variant information in gliomas, we determined the extent to which multiregion sequencing could lead to the identification of additional tumor variants within our cohort. Among the BrMETs, sampling additional tumor regions did not identify significantly more variants. However, among glioma samples, sequencing three regions instead of one raised the median number of identified variants from 61 to 98 (Fig. 2D). Thus, multiregion sequencing in gliomas captures a more complete picture of the genomic landscape but provides only limited improvement in the characterization of BrMETs due to their increased comparative spatial genomic homogeneity. Having characterized glioma and BrMET genomic architecture at the variant level, we next sought to characterize the intratumoral heterogeneity of CNAs to determine whether there is evidence for spatial architecture that is similar to that of the variants. Strikingly, the landscape of CNAs within gliomas was markedly more spatially heterogeneous than the pattern observed within BrMETs (Supplementary Fig. S3). When we classified each CNA event as clonal, subclonal shared, or subclonal private, we identified a significant fraction of clonal CNAs within BrMETs, in contrast to the predominantly subclonal CNAs within gliomas (Fig. 2E). Thus, at both the variant and CNA levels, gliomas are significantly more spatially heterogeneous than BrMETs.
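The clonal/subclonal shared/subclonal private scheme and the single-site capture calculation lend themselves to a short sketch. The function and variable names below are hypothetical, and detection is treated as a simple boolean per region, which glosses over the depth- and purity-aware calling the study actually performed.

```python
def classify_variant(presence: dict[str, bool]) -> str:
    """Bin one variant by its detection across the sampled regions of a tumor.

    Categories follow the scheme used in the text: clonal (all regions),
    subclonal shared (more than one region but not all), and subclonal
    private (exactly one region).
    """
    n_detected = sum(presence.values())
    if n_detected == len(presence):
        return "clonal"
    return "subclonal_shared" if n_detected > 1 else "subclonal_private"

def single_site_capture_fraction(variant_regions: list[set[str]],
                                 region: str) -> float:
    """Fraction of all tumor variants recovered by sequencing one region."""
    captured = sum(1 for regions in variant_regions if region in regions)
    return captured / len(variant_regions)

# A variant detected in regions A and B of a three-sector tumor:
print(classify_variant({"A": True, "B": True, "C": False}))  # subclonal_shared
```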
Finally, to characterize these samples at the transcriptional level, we made use of a previously defined gene expression-based molecular classification of GBM into proneural, neural, classical, and mesenchymal subtypes (35). We identified a distribution of transcriptional subtypes with 12 neural, 12 classical, 16 mesenchymal, and 6 proneural subtypes across the 46 primary or recurrent GBM tumor sectors. Intriguingly, in a majority (9/16) of the tumors with multiple regions analyzed, we observed intratumoral heterogeneity of these transcriptional subgroups (Supplementary Fig. S4), in line with prior reports (20).
Intratumoral Heterogeneity of Tumor Antigens in Gliomas and BrMETs
Numerous clinical trials developing personalized neoantigen vaccines for GBM and other brain cancers are ongoing (8,9). To determine the consequences of tumor genetic heterogeneity on immunologic features of each tumor, we applied the pVacSeq neoantigen prediction pipeline (36,37) to define the neoantigen landscape across all glioma and BrMET samples. When we aggregated the total number of predicted neoantigens for each tumor from all sampled regions, BrMETs harbored a higher number of HLA class I neoantigens per tumor (median = 186) than either primary (median = 39) or recurrent gliomas (median = 51; Supplementary Table S3). Moreover, BrMET class I neoantigens were significantly (P < 0.001; unpaired t test) more clonal than those identified in gliomas (Fig. 3A and B). Of note, all EGFR mutations did yield predicted clonal class I neoantigens. However, despite all carrying the IDH1 R132H mutation, only one of the four IDH-mutant patients within the cohort had a predicted class I neoantigen from this variant, displaying the HLA haplotype dependence of these results. BrMETs also exhibited a greater number of HLA class II neoantigens (300 antigens) than primary (60 antigens) or recurrent (81 antigens) gliomas, and these neoantigens were significantly (P < 0.001; unpaired t test) more clonal in their tumor distribution (Supplementary Fig. S5A and S5B; Supplementary Table S4). For most patients, the spatial distribution of both class I and class II neoantigens closely mirrored the underlying variant distribution (Supplementary Fig. S6A–S6C). Finally, whereas BrMET HLA class I neoantigens are mostly captured by single-site tissue sampling, additional glioma neoantigens continue to be identified as additional tissue sites are sampled (Fig. 3C). Taken together, these data demonstrate that neoantigens are distributed heterogeneously in gliomas compared with BrMETs. In addition to neoantigens, cancer/testis (CT) antigens represent another group of tumor-specific antigens that can be recognized by the immune system. These antigens have highly restricted expression in normal tissue, can be expressed in reproductive cells, and are often upregulated in malignancies. Although CT antigens are targeted in a range of clinical trial efforts (38)(39)(40)(41), their expression and distribution in brain cancers has not been described previously. We therefore sought to characterize CT antigen expression and spatial distribution within our tumor cohort. Because CT antigens are wild-type proteins, normal tissue expression affects both the degree of anticipated off-target effects from directed immunotherapeutic efforts as well as the extent of immunologic tolerance during development. Thus, we scored each candidate antigen from a curated list of CT antigens based on its log-transformed expression relative to normal brain tissue.
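A minimal sketch of such a score follows, assuming a log2 ratio with a +1 pseudocount; the text specifies only "log-transformed expression relative to normal brain tissue," so the exact transform and the input units (TPM here) are assumptions.

```python
import numpy as np
import pandas as pd

def ct_antigen_scores(tumor_tpm: pd.DataFrame, normal_brain_tpm: pd.Series,
                      ct_genes: list[str]) -> pd.DataFrame:
    """Score CT antigens per tumor region relative to normal brain.

    tumor_tpm: genes x regions expression matrix; normal_brain_tpm: mean
    expression per gene in normal brain (e.g., a "Brain-Cortex" reference).
    Returns a CT-gene x region matrix of log-ratio scores.
    """
    tumor = np.log2(tumor_tpm.loc[ct_genes] + 1)      # +1 pseudocount (assumed)
    normal = np.log2(normal_brain_tpm.loc[ct_genes] + 1)
    return tumor.sub(normal, axis=0)

# Intratumoral heterogeneity of one antigen = variance of its score across
# regions of the same tumor, e.g., scores.loc["IL13RA2"].var().
```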
Among gliomas, some of the highest scoring CT antigens were BIRC5 (Survivin), PMEL (gp100), and IL13RA2 (Fig. 3D). Although BIRC5 was also the highest-scoring antigen among the BrMET samples, this group displayed relatively higher expression of many prominent CT antigens such as the MAGE family proteins, gp100, MART1, and HER2/neu compared with gliomas. One notable exception to this was IL13RA2, which was the lowest-scoring antigen among BrMETs but ranked third among gliomas. These findings were consistent regardless of whether brain or matched primary site tissue was used to generate BrMET CT antigen scores (Supplementary Fig. S7A). In contrast to the marked heterogeneity of tumor neoantigens, we observed no significant difference in the spatial distribution of CT antigen scores overall between gliomas and BrMETs as assessed by a cosine similarity index (Supplementary Fig. S7B). However, these characteristics do vary between CT antigens, with TERT, CTAG1B (NY-ESO-1), and the MAGE family trending to higher intratumoral variance (greater heterogeneity of expression) than the more clonal BIRC5 and gp100 (Fig. 3E).
[Fig. 3: B, clonal, subclonal shared, and subclonal private class I neoantigens in brain metastases and gliomas (significance determined by unpaired t test); C, impact of multiregion sequencing on total class I neoantigen load; D, heat map of CT antigen scores for each sample, calculated by normalizing tumor expression to normal "Brain-Cortex" expression (see Methods); E, average intratumoral variation in CT scores between regions of the same tumor plotted against the average cancer/testis antigen score for each gene among all brain metastases or gliomas.]
Spatial Resolution of Immune Landscapes in Gliomas and BrMETs
We next sought to define the immune cell infiltration within each tumor and to describe the extent of intratumoral heterogeneity within the local immune microenvironments. We adopted previously published immune deconvolution methods that resolve immune cell populations from bulk RNA-sequencing data (Supplementary Table S5). We calculated Danaher immune scores for all regions and observed substantial intertumoral variation, specifically among "CD8 T-cell" and "cytotoxic cell" scores (Fig. 4A). Surprisingly, we detected no significant difference in the aggregate immune scores between regions from glioma and BrMET samples. To probe potential differences in more detail, we next performed differential gene expression analysis on tumor regions from gliomas and BrMETs. We detected significantly higher levels of CD274 (PD-L1) and significantly lower levels of CXCL9 in gliomas compared with BrMETs (q < 0.01; t test with Benjamini-Hochberg multiple test correction; Fig. 4B).
[Fig. 4: A, heat map of Danaher immune scores (42), with each column representing a tumor region and each row an immune population; scores represent the average of the log-transformed expression of a collection of subset-specific genes. B, difference in log-transformed expression of PD-L1 (left) and CXCL9 (right) between tumor types for all samples within the cohort. C, difference in macrophage polarization (left) or ontogeny (right) based on previously published gene sets (see Methods) between tumor types (two-sided t test). D, Danaher scores for each sector from a tumor with three or more samples plotted in PC1-PC2 space following principal components analysis. DC, dendritic cell; NK, natural killer; Treg, regulatory T cell.]
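The Danaher-style scores described above reduce to averaging log-transformed marker-gene expression per immune population. In the sketch below, the gene sets are abbreviated placeholders rather than the published signatures (ref. 42), and the log2(+1) transform is an assumption.

```python
import numpy as np
import pandas as pd

# Illustrative marker genes only; the published Danaher gene sets (ref. 42)
# are larger and population-specific.
DANAHER_SETS = {
    "CD8 T cells": ["CD8A", "CD8B"],
    "Cytotoxic cells": ["GZMA", "GZMB", "PRF1", "KLRD1"],
    "Macrophages": ["CD68", "CD163", "MRC1"],
}

def danaher_scores(tpm: pd.DataFrame) -> pd.DataFrame:
    """Average log-transformed expression of each subset's marker genes.

    tpm: genes x regions expression matrix. Returns populations x regions,
    matching the heat-map layout described for Fig. 4A.
    """
    log_expr = np.log2(tpm + 1)
    return pd.DataFrame(
        {pop: log_expr.reindex(genes).mean() for pop, genes in DANAHER_SETS.items()}
    ).T
```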
In addition, because there was a robust infiltration of macrophages in both gliomas and BrMETs as determined by CIBERSORT (Supplementary Fig. S8A), we explored two previously published gene sets (45,46) to further characterize this population. Although we observed a slight polarization toward the immunosuppressive M2 phenotype, characterized by higher expression of STAT3 and MRC1 (CD206), within gliomas, we identified a major distinction within macrophage ontogeny. Specifically, the BrMETs had a significant skewing toward higher expression of genes associated with monocyte-derived macrophages relative to gliomas, which were enriched for microglial-specific genes (Fig. 4C). This is consistent with recent work in the field using single-cell analyses (47,48). Finally, we assessed the degree of immunologic intratumoral heterogeneity as estimated by the Danaher scores. We performed principal components analysis (PCA) on the Danaher immune scores for all tumors with RNA from at least three sectors. Plotting each region in PCA space, we observed that most regions from the same tumor clustered together (Fig. 4D). To quantify the extent of heterogeneity, we calculated the area in PC1-PC2 space (termed Danaher intratumoral heterogeneity) of the triangle with vertices corresponding to each region of a tumor. In contrast to the substantial differences in variant and neoantigen heterogeneity, we detected no difference in the degree of intratumoral immune cell heterogeneity between the tumor types (Supplementary Fig. S8B). A similar result was obtained by calculating pairwise cosine similarity for Danaher immune scores between regions from the same tumor (Supplementary Fig. S8B). Overall, these data suggest that the intratumoral spatial variation in the immune microenvironment is similar between gliomas and BrMETs and generally is less than the intertumoral variation.
TCR Clonotypic Diversity and Heterogeneity
To evaluate the diversity, heterogeneity, and degree of clonal expansion of TCR clonotypes within the infiltrating T-cell populations of GBM and BrMET tumor samples, we performed TCR sequencing on 65 regions from 22 tumors in the cohort. The TCR β-chain complementarity-determining region 3 (CDR3) is highly diverse and plays a significant role in antigen recognition. Therefore, the TCR β-chain CDR3 sequences can function as unique barcodes of individual T-cell clones as they are activated and undergo clonal expansion within the tumor. We classified clonotypes within each tumor region as either the dominant clone (clone 1) or in predetermined clonotype groups based on frequency (i.e., clones 2-5, 6-20, 21-100, 101-1,000, or more than 1,000 clones; Fig. 5A; Supplementary Fig. S9A). Unexpectedly, we observed substantial clonal expansion within GBM. For example, each region of GBM055 harbored a dominant clone comprising >17% of the T-cell repertoire. In addition, regions from GBM047.Re, GBM056, GBM059, GBM065.Re, and GBM079 all contained dominant clones present at frequencies >12%. In contrast, of the sequenced BrMETs, only one region from BrMET025 (NSCLC) harbored a dominant T-cell clone present at a frequency >10%. Overall, the T-cell fraction among all cells (estimated through TCR sequencing; see Methods) was significantly (P < 0.05; unpaired t test) higher within the BrMETs, whereas the TCR repertoires within GBM were determined to have a higher degree of clonality (P < 0.05; unpaired t test; Fig. 5B).
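Repertoire clonality of this kind is conventionally computed as 1 minus the normalized Shannon entropy of the clonotype frequencies; the sketch below assumes that definition, since the exact estimator used in the study is not restated here.

```python
import numpy as np

def repertoire_clonality(clone_counts: list[int]) -> float:
    """Clonality as 1 minus the normalized Shannon entropy of clone frequencies.

    Returns 0 for a perfectly even repertoire and values near 1 when a few
    expanded clones dominate. This is the conventional definition for TCR
    sequencing data; the study's exact estimator is an assumption here.
    """
    counts = np.asarray(clone_counts, dtype=float)
    if counts.size < 2:
        return 1.0  # a single-clone repertoire is maximally clonal
    freqs = counts / counts.sum()
    freqs = freqs[freqs > 0]
    entropy = -np.sum(freqs * np.log(freqs))
    return 1.0 - entropy / np.log(counts.size)

# An evenly distributed repertoire has clonality 0; a skewed one is higher.
print(repertoire_clonality([100, 100, 100, 100]))  # 0.0
print(repertoire_clonality([970, 10, 10, 10]))     # about 0.88
```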
To investigate the degree of intratumoral heterogeneity of the T-cell repertoires, we next explored the distribution of the top 10 clones within each tumor. The distribution of the expanded TCRs within each tumor differed greatly between patients, with some exhibiting marked T-cell homogeneity among all examined regions, whereas others showed profound intratumoral diversity (Fig. 5C). Several of the GBMs were particularly heterogeneous, with locally expanded T-cell clones present at frequencies greater than 10% within one region of the tumor but significantly less than 1% in all other regions. We then quantified this diversity at a repertoire-wide level through pairwise comparisons using the Morisita overlap index (MOI), a measure of similarity between populations based on the number of shared sequences and their relative frequencies. As expected, the MOI values approached 0 (no similarity) for repertoires from different patients but were variable within patients (Fig. 5D). Comparing the tumor types, we detected a higher level of intratumoral repertoire similarity among BrMETs than among GBMs whether the MOI or a similar cosine similarity index was used (Supplementary Fig. S9B and S9C). Therefore, the increased spatial heterogeneity of variants and neoantigens within GBM is recapitulated at the TCR repertoire level. Ultimately, to further validate that the expanded clones were specifically enriched within tumor tissue, we performed TCR sequencing on the peripheral blood from eight of these patients (five with GBM/three with BrMET). As expected, the T-cell repertoire from peripheral blood mononuclear cells (PBMC) trended toward lower clonality than the matched tumor-infiltrating lymphocytes (TIL; Supplementary Fig. S10A). Intriguingly, the degree of repertoire similarity between PBMCs and TILs appeared greater (P = 0.07) within BrMETs than within GBMs, perhaps suggestive of a stronger systemic immune response in patients with metastatic disease (Supplementary Fig. S10B). Tracking the most highly expanded intratumoral clones across each patient confirmed that most were detectable in peripheral blood but at substantially reduced frequencies (Supplementary Fig. S10C). In addition to this clonotypic diversity, we also observed differential V and J gene usage between peripheral blood and matched TILs, suggestive of a combination of both VJ-dependent and VJ-independent divergence as previously described (Supplementary Fig. S11; ref. 49).
Immunogenomics of a Patient with Hypermutated Recurrent GBM
Given the growing literature studying the genomics of hypermutated GBM and the ongoing assessment of whether tumors exhibiting this genotype may be more responsive to immunotherapeutic approaches (10,11,27), we characterized the immunogenomic landscape of a tumor with this phenotype. GBM065.Re presented with tumor progression of an IDH1-mutant anaplastic astrocytoma 5 years after initial resection. In the interim, the patient was treated with four cycles of vincristine, 1-(2-chloroethyl)-3-cyclohexyl-1-nitrosourea (CCNU), and procarbazine; proton radiotherapy; and six cycles of high-dose temozolomide (Fig. 6A). Variant analysis revealed that two regions from the recurrent tumor contained the MSH6 T1219I variant previously identified in Lynch syndrome and known to act in a dominant-negative manner (refs. 50, 51; Supplementary Fig. S12A).
The other two regions both contained unique MSH6 mutations not previously reported (G1148S and G1116D) but computationally predicted as "likely to impair molecular function" by a previously published tool for the prediction of MSH6 variant significance (52). Additional mutations in DNA mismatch repair genes such as MLH3 and POLD3 were detected in some but not all regions of the recurrent tumor. This heterogeneity of potentially pathogenic mismatch repair defects in hypermutated recurrent GBM was observed in one prior patient and suggests either the emergence of multiple unique routes to hypermutation occurring within the same tumor or a common alternative mechanism unrelated to these genes (34). An analysis of the mutational signatures within the recurrent tumor revealed a significant enrichment for signature 11, known to be associated with prior treatment with temozolomide (28,29) and previously reported to be enriched in hypermutated recurrent gliomas (33). We observed that most variants identified within each region of the recurrent tumor were subclonal private and not spatially distributed (Fig. 6C; Supplementary Fig. S12B). A small fraction (30 total) of variants were shared between all regions of the recurrent tumor, and an even smaller number (15 total) were shared by the primary and all sectors of the recurrence. Importantly, these included likely drivers of the tumor such as TP53 and IDH1 alterations. This remarkable heterogeneity is consistent with a prior report in which two regions of a hypermutated recurrent tumor were sequenced and shared less than 2% of all identified mutations (34). In contrast to the variant and neoantigen heterogeneity within this tumor, the TCR Vβ repertoires within two analyzed regions were more similar (MOI = 0.85) and had the highest clonality across all samples in the cohort. Remarkably, the top three T-cell clonotypes within GBM065.Re made up more than 37% of the intratumoral repertoire, suggesting a significant degree of clonal expansion. In addition, these dominant clones were all present at substantially reduced frequencies (<1%) within the patient's peripheral blood, confirming the specific intratumoral expansion of these cells (Fig. 6D). Finally, to more deeply characterize the immunologic landscape of GBM065.Re, we performed single-cell RNA sequencing (scRNA-seq) with TCR enrichment on sorted CD45+ immune cells (n = 1,728) isolated from this tumor immediately following surgical resection. We found that most of the cells were of the lymphoid lineage and were predominantly cytotoxic CD8+ T cells (59% of sequenced cells), with smaller populations of naïve and CD4+ regulatory-like T cells (Fig. 6E; Supplementary Fig. S13). The highly expanded T-cell clones identified through bulk TCR Vβ sequencing also were represented in the scRNA-seq data but were found dispersed throughout the CD8+ T-cell clusters, suggesting heterogeneous gene expression patterns among these clonal populations. We performed differential expression analysis on the hyperexpanded clonotypes and found them enriched for markers of activation such as KLRC3, KLRC4, and GZMK, together with MHC class II genes classically upregulated by activated T cells (Fig. 6F). Thus, despite substantial heterogeneity at the variant and neoantigen level, the hypermutated GBM065.Re contains clonally expanded T cells with an activated phenotype distributed throughout the tumor.
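The Morisita overlap index used for the pairwise repertoire comparisons above (e.g., the MOI of 0.85 for GBM065.Re) can be sketched as the Morisita-Horn formula over clone counts; the dictionary-based representation of a repertoire is an assumption for illustration.

```python
def morisita_overlap(rep_a: dict[str, int], rep_b: dict[str, int]) -> float:
    """Morisita-Horn overlap between two TCR repertoires.

    rep_a/rep_b map CDR3 sequence -> read (or template) count. Returns a
    value in [0, 1]: 0 for disjoint repertoires, 1 for identical clone
    frequency distributions.
    """
    total_a = sum(rep_a.values())
    total_b = sum(rep_b.values())
    shared = set(rep_a) & set(rep_b)
    cross = sum(rep_a[c] * rep_b[c] for c in shared)
    d_a = sum(n * n for n in rep_a.values()) / (total_a * total_a)
    d_b = sum(n * n for n in rep_b.values()) / (total_b * total_b)
    return 2 * cross / ((d_a + d_b) * total_a * total_b)

# Identical single-clone repertoires give an overlap of 1.0.
print(morisita_overlap({"CASSLG": 10}, {"CASSLG": 5}))  # 1.0
```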
DISCUSSION
Despite numerous advances in both targeted and immune-based therapies, the prognosis for patients with malignant brain tumors such as GBM or BrMETs remains poor. Within GBM, one potential rationale for the high rate of treatment failure is the extensive intratumoral cellular and molecular heterogeneity (12) owing to complex tumor clonal dynamics. However, the impact of this tumor cell diversity upon the tumor-immune microenvironment remains unclear. Furthermore, the spatial heterogeneity of both tumor and immune landscapes within BrMETs requires further study. To address these knowledge gaps, we performed comprehensive immunogenomic profiling on multiple spatially distinct regions from a cohort of 30 patients with primary or secondary malignant brain tumors. We observed a striking distinction in the distribution of somatic variants, with most mutations usually shared by all analyzed regions within BrMETs, whereas gliomas were markedly heterogeneous and subclonal. This dichotomy extended to the distribution of candidate neoantigens within these tumors, whereas the intratumoral distribution of targetable CT antigens was more homogeneous. Furthermore, the intratumoral TCR repertoire was significantly more similar between spatially distinct regions of BrMETs than gliomas, which often harbored locally expanded T-cell clones. Previous studies have reported on the significant genomic and transcriptional variability between spatially distinct regions of gliomas (19)(20)(21) and hypothesized the consequences of heterogeneity on treatment resistance. However, our results suggest that this spatial heterogeneity of genomic alterations is minimal within metastatic brain tumors, which instead display a markedly more clonal distribution. We envisage that this difference may be due to the separate evolutionary trajectories of these tumor types. Recent studies suggest that GBMs arise from the slow accumulation of somatic mutations in neural stem cells, during which time multiple subclones can develop before presenting as a clinically apparent tumor (53). However, secondary BrMETs likely develop quickly from the rapid growth of an already transformed subclone upon arrival into the CNS. This malignant clone will quickly develop into an apparent tumor, allowing less time for the development of genetically disparate subclones. Previous studies have shown that these metastatic clones can accumulate additional genomic alterations relative to the matched primary, thereby providing potentially unique therapeutic targets that are likely clonal within the metastatic tumor (22,54). Future work should explore whether this tumor spatial homogeneity represents a specific feature of BrMETs or is a more general characteristic of secondary metastatic tumors. Importantly, the comparative distinctions we observed extend to the distribution of candidate class I and class II neoantigens, as patients with metastatic tumors harbor a significantly higher proportion of clonal neoantigens. This dichotomy between BrMETs and gliomas could have profound implications on antitumor immunity, as studies have reported that T-cell immunoreactivity against clonal neoantigens drives sensitivity to checkpoint blockade treatment (17,18).
However, additional work is needed to characterize the degree of antitumor immunity within malignant brain tumors and determine the relative contributions of clonal and subclonal neoantigens in stimulating immune responses. In particular, detailed analysis of the antigenic targets of the T-cell clones within tumors will be critical to understanding how antigen clonality shapes immunogenicity. Recent studies have broadened our understanding of the immune microenvironment within gliomas and BrMETs through methods such as mass cytometry, RNA sequencing, and immunofluorescence (47,48). Our work builds on this by exploring the spatial heterogeneity of the immune response and performing a deeper analysis of the TIL populations. Through immune profiling from RNA-sequencing data, we did not detect significant intratumoral differences in either the immune infiltrate or activation state for either tumor type, in contrast to the significant heterogeneity of variants and neoantigens within gliomas. However, many of the immune profiling analyses performed are to some extent limited in their ability to resolve specific immune cell populations and activation states owing to their reliance on bulk RNA-sequencing data. Further spatial analysis with more sensitive metrics such as flow cytometry, immunofluorescence, scRNA-seq, and/or spatial transcriptomics would be needed to clarify whether this is true immune homogeneity or a limitation of bulk RNA-sequencing analysis. In contrast, TCR repertoire sequencing did uncover substantial spatial heterogeneity. Although both GBMs and BrMETs demonstrated evidence of expanded intratumoral T cells, many of the dominant clones within GBM samples were highly spatially restricted. These data suggest that tumors harbor complex immune microenvironments in which a given tumor may contain pockets of clonally expanded T cells adjacent to regions with minimally expanded T cells. Whether this T-cell heterogeneity is due to the recognition of spatially diverse subclonal antigens or the result of extrinsic regional features such as inflammatory cytokines that promote clonal expansion requires further study. Tumor mutational burden has been shown to correlate with checkpoint blockade response in some studies (18,55), and hypermutated tumors resulting from germline or acquired DNA repair deficiencies have been shown to be uniquely responsive to immunotherapy across other tumor types (56). Within hypermutated GBM, which occurs in up to 20% of recurrent disease, the efficacy of checkpoint blockade and other immunotherapies remains an open question. Several reports have observed clinical responses from checkpoint blockade in patients with hypermutant GBM harboring germline DNA repair defects, but a recent retrospective analysis found no benefit in patients with mismatch repair-deficient GBM treated with PD-1 blockade (10,11,27). Ongoing clinical trials are designed to address this question, such as the Alliance study A071702/NCT04145115 [a study testing the effect of immunotherapy (ipilimumab and nivolumab) in patients with recurrent glioblastoma with elevated mutational burden]. Although characterization of more patients is needed to generalize our findings, the immunogenomic profiling of GBM065.Re revealed potentially unique features of these tumors. First, most mutations and associated neoantigens within this hypermutant tumor were subclonal private.
These data, together with our other findings and published work (21), indicate that single-site profiling analysis would be inadequate and dramatically underestimate both tumor complexity and neoantigen burden. Second, despite this dramatic heterogeneity of variants, we observed remarkably similar TCR repertoires with highly expanded CD8+ T-cell clones within different regions. scRNA-seq analysis of these expanded clones shows evidence of T-cell activation, suggesting potential tumor reactivity. Whether these clones react to the small subset of clonal neoantigens, the thousands of regional neoantigens, or overexpressed CT antigens or are responding in a nonspecific way to inflammatory stimuli will require further analysis. However, our data indicate that hypermutant tumors can contain highly activated and clonally expanded T cells while also representing an extreme of mutational and neoantigen heterogeneity. Ongoing work is directed at understanding how the immune system may direct responses to antigen targets in the context of this heterogeneity. Taken together, these results carry immediate significance for the design and implementation of clinical studies in malignant brain tumors. The extensive intratumoral heterogeneity within gliomas indicates that single-site genomic analysis will not capture the totality of targetable mutations and neoantigens. This is of particular importance in the design of targeted therapy studies and/or neoantigen vaccines, as the analysis of multiple regions identifies a significantly higher number of targetable neoantigens than one site alone. We have already begun to implement this approach in a trial of a personalized neoantigen vaccine in patients with newly diagnosed GBM (NCT03422094). An additional benefit of this approach is increased confidence in the clonality of targeted antigens, as clonal neoantigens would presumably represent ideal targets. However, it remains to be seen whether sampling additional regions beyond the number in this study and the aforementioned clinical trial would shed further light on the molecular landscape. Furthermore, the complexity of the intratumoral T-cell repertoire argues that multiple sites should be analyzed for the potential expansion of neoantigen-reactive T cells or isolation of tumor-specific TCRs for therapy. On the other hand, the relative homogeneity of BrMETs suggests that a single region is sufficient for genomic and immunologic phenotyping. Overall, this study provides immunogenomic profiling of malignant brain tumors using a multisector approach, showcasing substantial intratumoral heterogeneity within gliomas while highlighting surprising homogeneity among BrMETs. These distinctions hold for both cancer cell-intrinsic (genomic alterations) and cancer cell-extrinsic (TCR repertoire) features. A growing understanding of the tumor-immune microenvironment and an appreciation of its spatial complexity may improve the efficacy of immunotherapy in patients with malignant brain tumors. Patient Recruitment All study participants were neurosurgical patients at Barnes-Jewish Hospital with pathologically confirmed stage III/IV glioma or metastatic disease. Prior to surgery, we obtained written informed consent from the patients for a Washington University School of Medicine Institutional Review Board-approved protocol (#201111001) for the analysis of tumor tissue and peripheral blood and sharing of genomic data.
All procedures and experiments were performed in accordance with the ethical standards of the 1964 Declaration of Helsinki. Clinical information related to each patient was collected at the time of surgery and is summarized in Supplementary Fig. S1. Clinical Sample Processing and Nucleic Acid Extraction Tumor samples were processed immediately following surgical resection under sterile conditions. Tissue was thoroughly washed in PBS to eliminate peripheral blood leukocyte contamination. For tumor samples that were resected en bloc, spatially distinct sectors were dissected out from the mass via scalpel. In other cases where multiple discrete regions demonstrated enhancement on MRI, tissue was separated at the time of surgery and a representative sample from each sector was chosen for analysis. In all cases, tissue was immediately flash-frozen and initially stored at −80°C or in liquid nitrogen until further use. Peripheral blood was separated through Ficoll-Paque PLUS density gradient (GE), and the buffy coat was collected and frozen for matched normal genomic DNA. Total RNA and genomic DNA were extracted from peripheral blood mononuclear cells or frozen tissue using the Qiagen AllPrep DNA/RNA Kit (catalog no. 80204) according to the manufacturer's instructions (Qiagen). As melanin is a known inhibitor of enzymatic reactions and coprecipitates with RNA (57), further purification was performed for the melanoma sample as described (58), modified with an RNeasy (Qiagen) column-based cleanup to remove the additives used to bind the melanin. For GBM065.Re, regions 3 and 4 and the sample from the primary tumor came from formalin-fixed, paraffin-embedded tissue cores. In these cases, total RNA and genomic DNA were extracted using the Qiagen AllPrep DNA/RNA FFPE Kit (catalog no. 80234). Sequencing and Somatic Variant Detection All sequencing was performed on the Illumina NovaSeq (S4) platform. From each patient, a normal sample and two to four samples from a single tumor were subjected to WES. Eighty of 93 tumor samples had sufficient tissue to perform RNA sequencing. WES and RNA sequencing from 20 tumors were performed at the McDonnell Genome Institute, and 10 tumors underwent WES and RNA sequencing at the Institute for Genomic Medicine at Nationwide Children's Hospital. WES was performed on two additional recurrent GBM065 regions and a GBM065 primary tumor by Novogene. Read alignment, somatic variant calling, variant filtering, variant effect prediction, additional variant annotation, and RNA expression estimation were performed using a tumor analysis pipeline defined in the Genome Modeling System as previously described (59). Briefly, all fastq files were aligned to the human reference genome build GRCh38 with HISAT2 (RRID:SCR_015530) for RNA (60) and BWA-MEM for DNA (61). Somatic variant calling was performed using Strelka (62), VarScan (63), Mutect (64), and Pindel (65). To remove any false-positive variants and discover TERT promoter mutations, custom capture validation sequencing was performed to an average depth of 582× for all unique variant sites using NimbleGen SeqCap EZ Prime Choice Probes (12,388 total probes created from 11,425 variants and 51 genes; 1.49 Mb of sequence targeted for capture). Tumor purity was estimated using TPES (66) for samples with sufficient variant count (>20), and for those with insufficient variants, purity was estimated as 2 × the median variant allele frequency (VAF).
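The purity fallback described above is simple enough to state in code. A minimal sketch (Python; only the low-variant-count fallback is shown, since TPES itself is an R package, and the VAFs are illustrative):

```python
from statistics import median

def estimate_purity_fallback(vafs, max_purity=1.0):
    """Fallback purity estimate for samples with too few somatic variants
    for TPES: purity ~ 2 x median VAF, under a heterozygous,
    copy-neutral assumption for the typical somatic variant."""
    if not vafs:
        raise ValueError("no variant allele frequencies supplied")
    return min(2 * median(vafs), max_purity)

# Illustrative VAFs from a low-mutation-count sample (<= 20 variants).
vafs = [0.18, 0.22, 0.20, 0.25, 0.15, 0.21]
print(f"estimated purity: {estimate_purity_fallback(vafs):.2f}")  # ~0.41
```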
SciClone (67) and ClonEvol (68) were used to assess the clonality of mutations and subclonal evolution of primary and recurrent GBM065 tumors. CN Variation Detection Matched tumor/normal WES data were used to predict CN variations (CNV) with CNVkit (69). Each tumor region sample was compared pairwise against the single matched normal data for the patient. CNVkit was run (via "cnvkit batch") using default parameters. Regions of alignment were summarized to a resolution of 10 kb, and these data were subjected to segmentation using circular binary segmentation. The resulting segmentation files were divided into three cohorts: (i) primary and recurrent GBMs, (ii) breast cancer BrMETs, and (iii) NSCLC BrMETs. GISTIC2 (70) was run on the segments of each cohort with a q value cutoff of 0.1. Relative amplitude thresholds were used to create the heatmaps and clonality plots in Fig. 2. GBM Molecular Subtyping Single-sample gene set enrichment analysis scores (71) were calculated for each sample in the cohort using the previously defined gene sets (35) and the GSVA R package (72). The circlize (RRID:SCR_002141) R package was used for the visualization of these molecular subtypes (73). Neoantigen Prediction Clinical class I and class II HLA typing was performed on each normal sample by Histogenetics to two-field resolution. For each tumor sample, a VCF file containing all passing somatic variants was annotated with Ensembl VEP (ref. 74; RRID: SCR_002344) using the parameters --everything, --flag_pick, and --plugin (Wildtype and Downstream). The VCF was further annotated with DNA and RNA read counts using VAtools. Using this VCF as input, a containerized version of pVACtools (ref. 37; DockerHub: griffithlab/pvactools:1.5.0) was used to predict and annotate likely neoantigens in each sample as previously described (75). Briefly, using the submodule "pvacseq run," we performed peptide/MHC binding affinity predictions with eight class I and four class II algorithms (NNalign, NetMHC, NetMHCIIpan, NetMHCcons, NetMHCpan, PickPocket, SMM, SMMPMBEC, SMMalign, MHCflurry, MHCnuggetsI, MHCnuggetsII). For this analysis, a peptide was considered a neoantigen candidate only if it arose from a variant with tumor DNA VAF >5% and had a median binding affinity (across all algorithms) <500 nM. No sequences were excluded based on tumor RNA VAF, coverage, or gene expression values to ensure that no high-quality candidates were excluded based solely on RNA data. Neoantigens that pass this filter are designated "candidate neoantigens" in the article. Genomic Alteration Distribution All genomic alterations (variants and associated neoantigens, CNVs) were determined independently with each region as an individual sample. They were then aggregated together for the plots in Fig. 1 to provide an overall description of each tumor. We then determined the distribution of each alteration between regions from the same tumor. If the same change was observed in all regions of the tumor, it was defined as "clonal." If it was identified in more than one but not all regions, it was defined as "subclonal shared." Alterations observed in only one region of each patient sample were defined as "subclonal private." Cancer/Testis Antigen Analysis A subset of cancer/testis antigens was chosen for analysis based in part on prior work targeting them in GBM (ref. 38). Antigen scores were generated from the expression (transcripts per million, TPM) of each gene normalized to the median tissue expression for "Brain-Cortex" in the Genotype-Tissue Expression Project database.
Cancer/testis antigen scores were generated for the metastatic samples through normalizing to "Brain-Cortex" expression or the associated matched primary tissue site ("Breast Mammary Tissue," "Lung," or "Skin Sun Exposed"). The intratumoral spread of cancer/testis antigen expression was assessed by calculating the variance of scores within the regions of a tumor or by calculating the cosine similarity between regions, where each antigen's score in a given region represents one component of the vector. RNA-Sequencing Analysis and Immune Microenvironment Profiling Gene and transcript quantification was performed using kallisto (76). Differential expression analysis between tumor types was then performed using sleuth (77). To quantify immune cell populations in the tumors, CIBERSORT (43) was run using the LM22 signature gene file, with quantile normalization disabled, for 100 permutations. Relative immune cell abundance was obtained using gene expression as previously described by Danaher and colleagues (42). PCA dimensionality reduction was implemented on Danaher scores for each tumor region in R version 3.6.2 using the ggbiplot package. Intratumoral similarity was assessed by representing Danaher scores as vectors and calculating the normalized dot product between pairs of regions from the same tumor. The immune intratumoral heterogeneity was defined for tumors with three distinct regions as the area in PC1-PC2 space of the triangle whose vertices were represented by each region's Danaher score. The myeloid-specific analyses were performed through the generation of log-transformed scores, making use of gene sets defined in previous publications. A total of 11 and 10 genes were used to define the M1 and M2 scores, respectively, based on gene expression signatures of in vitro polarized M1 or M2 macrophages (45). A total of six distinct genes for each were used to define the scores for both microglial and monocyte-derived macrophages (46). These genes were selected based on differential expression in tumor-associated macrophages from distinct lineages in scRNA-seq data of human gliomas. To control for differences in overall macrophage abundance between samples, the difference between these scores (corresponding to a ratio of gene expression) was used to generate an "M2-M1 skew" or "MDM-microglial skew." TCR Repertoire Sequencing Sequencing of the CDR3 regions of TCR β-chains was performed through the immunoSEQ Assay (Adaptive Biotechnologies) and used the same DNA extracted from tumor samples used for WES. Initial analysis was performed on the immunoSEQ ANALYZER 3.0. Additional analysis was performed using the immunarch package in R. Visualization of the V-J gene usage among T-cell repertoires was performed using the circlize (RRID:SCR_002141) package in R (73). The intratumoral T-cell fraction was calculated by dividing the number of productive TCR templates by the number of nucleated cells estimated by the amplification of reference genes. The Simpson clonality for each region was calculated by taking the square root of the Simpson diversity index for all productive rearrangements, with possible values ranging from 0 (very polyclonal) to 1 (predominantly monoclonal or oligoclonal). The similarity between TCR repertoires was assessed through either the normalized dot product (cosine similarity) or Morisita overlap between the vectors of TCR clonotype abundances. Both of these metrics are based on pairwise comparisons between two repertoires.
The T-cell repertoires for two regions are represented by vectors with indices covering the union of TCRs observed in either of the corresponding regions. Each position of the vector then represents the abundance (as a count) of a given clonotype within that region. The cosine similarity between regions A and B is

$$\mathrm{cosine}(A, B) = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^2}\,\sqrt{\sum_{i=1}^{n} B_i^2}}$$

where $A_i$ and $B_i$ are the abundances of clonotype $i$ in regions A and B and $n$ is the number of distinct clonotypes observed across both regions. The Morisita overlap is

$$\mathrm{MOI}(A, B) = \frac{2\sum_{i=1}^{n} A_i B_i}{\left(\sum_{i=1}^{n} A_i^2 / N_A^2 + \sum_{i=1}^{n} B_i^2 / N_B^2\right) N_A N_B}$$

where $A_i$, $B_i$, and $n$ represent the same values as before and $N_A$ and $N_B$ represent the total number of productive rearrangements observed in region A or B, respectively. Both of these metrics provide a value between 0 and 1, where 0 represents no similarity (orthogonal vectors or completely disparate populations) and 1 represents complete similarity (parallel vectors or completely identical populations). Comparisons between tumor types were performed by taking all intratumoral comparisons between GBMs and BrMETs and grouping them. Sample Preparation for Single-Cell Sequencing For GBM065.Re, the fresh surgical sample was rinsed with PBS to remove visual blood contaminant and manually dissociated using frosted microscope slides and gentle trituration. The resulting single-cell suspension was passed through 100-μm and 70-μm filters before undergoing Percoll (GE Healthcare Life Sciences) density gradient centrifugation to remove myelin contamination. Following this separation, the resulting cell pellet underwent RBC lysis with ACK Lysis Buffer (Lonza Biosciences) and was frozen in 90% FBS and 10% DMSO at −80°C and later stored in liquid nitrogen until further use. The tumor sample was later thawed, and the single-cell suspension was stained with anti-CD45, anti-CD11b, anti-CD3, and Zombie NIR Viability Dye (BioLegend). Live CD45+ single cells were purified by FACS on a BD FACSAria II with an 85-μm, 45-psi nozzle into a buffer of PBS with 0.04% BSA. A total of 13,000 cells were submitted for analysis. Single-Cell Library Preparation cDNA was prepared after the Gel Beads in Emulsion (GEM) generation and barcoding, followed by the GEM-RT reaction and bead cleanup steps. Purified cDNA was amplified for 10 to 14 cycles before being cleaned up using SPRIselect beads. Samples were then run on a bioanalyzer to determine the cDNA concentration. TCR target enrichment was done on the full-length cDNA. GEX and enriched TCR libraries were prepared as recommended by the 10× Genomics Chromium Single Cell V(D)J Reagent Kits (v1 Chemistry) user guide with appropriate modifications to the PCR cycles based on the calculated cDNA concentration. For sample preparation on the 10× Genomics platform, the Chromium Single Cell 5′ Library and Gel Bead Kit (PN-1000006); Chromium Single Cell A Chip Kit (PN-1000152); Chromium Single Cell V(D)J Enrichment Kit, Human, T Cell (96 rxns) (PN-1000005); and Chromium Single Index Kit T (PN-1000213) were used. The concentration of each library was accurately determined through qPCR using the KAPA library quantification kit according to the manufacturer's protocol (KAPA Biosystems/Roche) to produce cluster counts appropriate for the Illumina NovaSeq6000 instrument. Normalized libraries were sequenced on a NovaSeq6000 S4 Flow Cell using the XP workflow and a 151 × 8 × 151 sequencing recipe according to the manufacturer's protocol. A median sequencing depth of 50,000 reads/cell was targeted for each Gene Expression Library and 5,000 reads/cell for each V(D)J (T-cell) library.
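As a concrete companion to the repertoire-similarity formulas given under TCR Repertoire Sequencing above, a minimal Python sketch (the clonotype counts are hypothetical):

```python
import math

def repertoire_similarity(region_a, region_b):
    """Cosine similarity and Morisita overlap between two TCR repertoires,
    each given as a dict of clonotype -> productive template count."""
    clonotypes = set(region_a) | set(region_b)  # union of observed TCRs
    A = [region_a.get(c, 0) for c in clonotypes]
    B = [region_b.get(c, 0) for c in clonotypes]
    dot = sum(a * b for a, b in zip(A, B))
    cosine = dot / (math.sqrt(sum(a * a for a in A))
                    * math.sqrt(sum(b * b for b in B)))
    n_a, n_b = sum(A), sum(B)  # total productive rearrangements per region
    morisita = (2 * dot) / ((sum(a * a for a in A) / n_a**2
                             + sum(b * b for b in B) / n_b**2) * n_a * n_b)
    return cosine, morisita

r1 = {"CASSLGQGAEQFF": 400, "CASSPDRGTEAFF": 300, "CASSQDLNTEAFF": 50}
r2 = {"CASSLGQGAEQFF": 350, "CASSPDRGTEAFF": 280, "CASSFSTGELFF": 90}
cos, moi = repertoire_similarity(r1, r2)
print(f"cosine = {cos:.2f}, Morisita overlap = {moi:.2f}")
```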
Single-Cell Sequencing Analysis Raw sequencing data were processed with Cell Ranger, version 3.0.1 (78), from 10× Genomics and mapped onto a human genome reference (GRCh38-2020-A). Downstream analysis was performed using the Seurat (RRID: SCR_007322) R package version 3.2.0 (79). In total, 2,147 cells passed the quality control steps performed by Cell Ranger. Low-quality cells and potential doublets were accounted for by removing cells that contained fewer than 500 expressed genes, an nCount value greater than the nCount value of the 93rd percentile of the total sample, or more than 10% mitochondrial transcripts. Following filtering, 1,728 cells remained within the final data set. Genes that were expressed in fewer than 100 cells were also removed. For each cell, expression of each gene was normalized to the sequencing depth of the cell, scaled to a constant depth (10,000), and log-transformed. Variable genes were selected with default settings, and PCA was performed on the variable genes. Dimensionality reduction and visualization were performed with the uniform manifold approximation and projection (UMAP) algorithm (Seurat implementation) using the first 15 PCA dimensions. Unsupervised graph-based clustering of cells was performed using the mentioned PCA dimensions with a resolution of 0.8. Enriched gene expression levels in each cell cluster were identified by a Wilcoxon rank sum test-based function. These genes, along with common cell-type markers, were used to establish the cell identity of each cluster. Projection of average expression of marker genes into UMAP or violin plots was used for cell-type identification. Gene expression signatures used for definition of clusters were as follows: CD3E, CD3D, and CD3G (T cells); NKG7, PRF1, GZMH, and CD3− (natural killer cells); MS4A1, CD79B, and CD3− (B cells); and CD14, S100A8, S100A9, C1QC, CD68, CTSD, and HLA-DR+ (monocytes/macrophages). Second-level clustering of T cells was performed by subsetting only T-cell clusters and rerunning scaling, log transformation, and variable gene selection. In addition, PCA was performed again on the new variable genes. Dimensionality reduction and visualization were performed with the UMAP algorithm using the first 10 PCA dimensions. Unsupervised graph-based clustering of cells was performed using the mentioned PCA dimensions with a resolution of 1.0. Gene expression signatures used for definition of clusters were as follows: CCR7, SELL, TCF7, and KLF2 (naive/central memory T cells); IL7R, KLRB1, and SELL− (effector memory T cells); TIGIT, CTLA4, and CD4 (CD4+ regulatory-like T cells); and NKG7, PRF1, CCL5, GZMH, CD3, CD8A, and CD8B (cytotoxic CD8+ T cells). V(D)J libraries were processed with CellRanger V(D)J, version 2.0.0, from 10× Genomics and mapped onto a human VDJ reference (GRCh38-2.0.0). Clonotype analysis was performed with the scRepertoire (version 0.99.17) R package (80). Clonotypes were defined as the combination of the genes of the TCR A and B chains and nucleotide sequences as previously discussed (81). Statistical Analysis Data analysis and visualization in R were performed using the tidyverse package. Statistical significance for variant, neoantigen, CNV clonality estimates, heterogeneity estimates, T-cell fraction, and T-cell clonality was assessed using an unpaired t test. Significance for differential expression was determined by a multiple t test with Benjamini-Hochberg adjustment with an FDR = 0.05.
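The cell-level quality filters described above translate directly into code. A minimal sketch (Python with a synthetic counts matrix; the actual analysis used Seurat in R, and the mitochondrial gene set here is a stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic counts matrix: rows = cells, columns = genes; the last 13
# columns stand in for mitochondrial (MT-) genes in this toy example.
counts = rng.poisson(0.4, size=(2147, 2000))
mito = counts[:, -13:].sum(axis=1)

n_genes = (counts > 0).sum(axis=1)             # genes detected per cell
n_count = counts.sum(axis=1)                   # total UMIs per cell
pct_mito = mito / np.maximum(n_count, 1)

keep = (
    (n_genes >= 500)                           # drop low-complexity cells
    & (n_count <= np.percentile(n_count, 93))  # drop likely doublets
    & (pct_mito <= 0.10)                       # drop dying/stressed cells
)
print(f"{keep.sum()} of {len(keep)} cells retained")
filtered = counts[keep]
```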
Data Availability Exome and RNA-sequencing read data have been made available in the controlled access repository, the database of Genotypes and Phenotypes (dbGaP), under accession number phs002612.v1.p1. Original TCR sequencing data are available upon request.
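As a closing illustration of the Genomic Alteration Distribution scheme defined in the methods above, a minimal sketch of the three-way clonality classification (Python; the variants and region labels are hypothetical):

```python
def classify_clonality(variant_regions, all_regions):
    """Classify an alteration by the set of tumor regions it appears in:
    'clonal' if present in every region, 'subclonal shared' if in more
    than one but not all, and 'subclonal private' if in exactly one."""
    hits = set(variant_regions) & set(all_regions)
    if hits == set(all_regions):
        return "clonal"
    if len(hits) > 1:
        return "subclonal shared"
    return "subclonal private"

regions = {"R1", "R2", "R3"}
calls = {
    "TP53 p.R273H": {"R1", "R2", "R3"},   # shared by all regions
    "MSH6 p.G1148S": {"R2", "R3"},        # shared by some regions
    "POLD3 frameshift": {"R1"},           # found in a single region
}
for variant, found_in in calls.items():
    print(variant, "->", classify_clonality(found_in, regions))
```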
A Pathologist’s perspective of penile carcinoma – an institutional study at Indian Red Cross Hospital, Nellore Background: Penile cancer is an unusual malignancy with higher incidence rates in developing countries like India when compared to the Western world. Incidence varies from 0.7 to 2.3 cases per 100,000 men in urban India and 3 cases per 100,000 men in rural India. In spite of its rarity, it forms a suitable medical model for theranostics. Given this relevance, we put forward our departmental experience in a rural Indian setup. Materials and Methods: This is a retrospective three-year study of penile SCC patients managed at the Indian Red Cross Cancer Hospital, India. Data were compared with similar studies across the world. Results: Twenty-three patients were diagnosed with squamous cell carcinoma of the penis during the period of study. In this study we observed a relatively younger age at presentation and a predominance of early-stage disease. A higher percentage of involvement of the prepuce and body was also noted. Conclusion: Consideration of prognostic histopathological factors may help to tailor appropriate management in this infrequent malignancy. Introduction Penile carcinoma is a rare malignancy with an incidence peak in the sixth and seventh decades of life 1 . Histologically, 95% of cases correspond to squamous cell carcinoma (SCC). 2,3 There appears to be an ethnic variation in the incidence rates internationally, and data from nonwhite patients are limited. 4 The etiology of penile cancer remains unclear. Strongly associated risk factors include Human Papilloma Virus type 16 (HPV 16) infection, phimosis, lack of circumcision and cigarette smoking. 5,6 Penile cancer is commonly seen in men of low socioeconomic status, with poor hygiene contributing significantly. Due to its superficial location, penile cancer lends itself to early detection and management. However, many patients present for treatment at an advanced stage due to psychological inhibitions. The treatment includes surgery with adjuvant chemoradiotherapy. The surgical procedure may include circumcision, local excision, partial penectomy or even complete penectomy. A 'wait and watch' policy is often preferred over prophylactic lymphadenectomy. 7,8 Inguinal lymph node metastases (LNM) are an important prognostic factor in survival for carcinoma of the penis. Results from the studies of Pandey et al. and Graafland et al. showed that extranodal extension, bilateral inguinal metastasis and pelvic node metastasis were prognostic factors in node-positive patients. 9,10 Overall 5-year survival ranges from 27% in patients with clinically positive nodes to 66% in patients with clinically negative nodes. 11 To put forward the data on the biology of this cancer, we undertook a review of all cases of penile cancer diagnosed and treated at the Indian Red Cross Society Hospital, Nellore, India from 2010 to 2012. The aim was to determine the prevalence and clinicopathological correlates of penile cancer in a sample of the Indian population and compare the data with other international studies. Materials and Methods We retrospectively reviewed the medical records of 30 patients with penile lesions from January 2010 to December 2012. Seven patients were found to have only moderate dysplasia on two consecutive histopathological examinations and were excluded. The remaining 23 patients were diagnosed with penile carcinoma and were treated at the place of study, i.e., the Indian Red Cross Hospital, Nellore, India.
Of note, this tertiary care hospital offers free-of-cost treatment to the economically disadvantaged rural Indian population. Data included patient age, circumcision status and history of sexually transmitted infections. Penile lesion size, location, presence of palpable inguinal nodes and nature of treatment were noted. Information was obtained from the Pathology department regarding the histological subtype, degree of differentiation and pathological stage. Patients were followed up at 3-month intervals for a uniform period of 1 year. Follow-up included both physical and ultrasound examinations. Tumour stage was classified according to the 2009 UICC International Union against Cancer Tumour Node Metastasis stage classification system. 12 Histological grade was assigned according to the modified three-tier Broders grading system. 13 The node status was evaluated by the occurrence of LNM on biopsy during follow-up or by the results of lymphadenectomy. Results There were 23 penile cancer cases during the study, with a mean age of 50.3 years. The youngest patient was 26 years old and the oldest 90 years. Most of them had little educational background and were reluctant to reveal data about extramarital sexual contact. None of them had any associated urinary tract or sexually transmitted infection. Histologically, all 23 patients had SCC, of which one had the verrucous variant and one the microinvasive variant. Preoperative biopsy and surgical specimens were reviewed and correlated. Grading was assigned using the modified three-tier Broders classification. Grade I (well-differentiated tumor) was found in 12 cases (52.17%) and Grade II (moderately differentiated tumor) was found in 11 cases (47.83%). No Grade III lesions were diagnosed. Nine cases (39.13%) had a lesion size less than 2 cm, nine cases (39.13%) a lesion size between 2 and 5 cm, and five cases (21.74%) a size greater than 5 cm. Among these, two lesions between 2 and 5 cm and one lesion greater than 5 cm showed lymph node metastases (LNM). Three patients (13.04%) had inguinal lymphadenectomy on account of metastases. One patient had recurrence following radiotherapy. Two patients had single inguinal node involvement (N1; 8.70%). One patient had three nodes positive for metastasis among 30 excised nodes (N2; 4.35%). Findings consistent with HPV infection were seen in one patient on H&E examination. Data comparison is summarized in the table below. Discussion The present study demonstrates that penile cancer is uncommon in Indian men. Because of its rarity, the effectiveness of HPV screening methods is not yet known, and hence such screening has not been recommended. Because of its low incidence and low rate of follow-up, determining prognostic factors for cancer-specific survival has been challenging. The mean age was found to be 50.3 years. This was comparatively lower than the mean age reported in other studies around the world. The prognostic role of age is controversial. The specific reason behind advanced age being a poor prognostic factor is still unknown. 18,19 Histologically, all our cases were SCC, which coincides exactly with the study from Indonesia. This accords with the existing literature showing SCC as the most frequent histopathological variant, accounting for more than 85%. 20,21 Histological grade carries an established prognostic significance in malignant lesions.
The higher the histological grade, the higher the chance of metastasis and the poorer the prognosis. In this regard, our study shows only well- to moderately differentiated lesions, with three of the moderately differentiated Grade II lesions showing LNM. This supports the opinion of Chen et al. and Hegarty et al., who concluded that "histological grade and not tumour stage is an important prognostic predictive factor for regional LNM." 22 Regarding clinical morphology, ulceration in most cases indicates tumour invasion. In our study, ulcerative lesions, alone or in combination with a vegetative pattern, predominated. As regards lesion size, the greater the size, the greater the chance of LNM and the greater the role of adjuvant therapy. This is well established by our study, which clearly shows LNM to be present in lesions greater than 2 cm. In this study, the predominant location was the glans, alone (47.83%) or associated with other regions of the penis (13.04%). It was followed by the prepuce, which was affected alone in 26.09% of cases, and the body, which was involved in 13.04% of cases. This accords with published data showing involvement of the glans in 48% of cases. 23,24 However, our study shows a higher percentage of involvement of the prepuce and body compared with existing data. The predominance of early tumour stages and the absence of LNM favoured partial penectomy, as observed in our study. Histopathologically, on H&E-stained slides, findings consistent with HPV infection were identified in only one case. Also, only one case with LNM showed recurrence following radiotherapy. Our work has limitations: the retrospective 3-year study design, the relatively small sample size, the high rate of patient non-compliance with follow-up, and the limited availability and expense of disease-related molecular markers. Conclusion Our study puts forward histopathological findings in penile cancer and correlates prognostic factors with other international studies. This helps to better understand the biological behaviour of penile cancer across the world and thereby helps to systematize the manner of treatment. In addition, we highlight the growing need for patient education and for protocols enabling a multiprofessional and interdisciplinary approach to this rare malignancy.
Aurintricarboxylic acid increases yield of HSV-1 vectors Production of large quantities of viral vectors is crucial for the success of gene therapy in the clinic. There is a need for higher titers of herpes simplex virus-1 (HSV-1) vectors both for therapeutic use as well as in the manufacturing of clinical grade adeno-associated virus (AAV) vectors. HSV-1 yield increased when primary human fibroblasts were treated with anti-inflammatory drugs like dexamethasone or valproic acid. In our search for compounds that would increase HSV-1 yield, we investigated another anti-inflammatory compound, aurintricarboxylic acid (ATA). Although ATA has been previously shown to have antiviral effects, we find that low (micromolar) concentrations of ATA increased HSV-1 vector production yields. Our results showing the use of ATA to increase HSV-1 titers have important implications for the production of certain HSV-1 vectors as well as recombinant AAV vectors. INTRODUCTION Recombinant adeno-associated virus (rAAV) vectors have been successfully introduced in several human gene therapy clinical trials because of their nonpathogenic nature, low toxicity, minimal immunogenicity, and long-term persistence. Production of large quantities of clinical grade rAAV vectors for gene therapy has been challenging due to limitations in scalability of the commonly used co-transfection protocol. 1 AAVs are not able to replicate by themselves and were first found to propagate only when adenoviruses or herpes viruses coinfected the same cells. 2,3 The first scalable rAAV protocol was based on adenovirus infection of rAAV/Rep-Cap cell lines. 4 Besides adenoviruses, herpesviruses have also been shown to provide complete helper virus functions for the production of AAV virions. 5,6 The minimal set of herpes simplex virus type-1 (HSV-1) genes required for AAV replication and packaging has been identified as the HSV-1 early genes UL5, UL8, UL52, and UL29. 7 These genes encode components of the HSV-1 core replication machinery: the helicase, primase, and primase accessory proteins (UL5, UL8, and UL52) and the single-stranded DNA-binding protein (UL29). A protocol for production of rAAV serotype 2 (rAAV2) vectors using HSV-1 amplicons expressing AAV2 Rep and Cap in combination with rHSV-1 helper vectors has been described, 8 and this protocol was modified and further optimized by several groups using coinfection of two rHSV-1 vectors, both replication-deficient infected-cell protein ICP27-mutants, one carrying rAAV provirus and a second bearing AAV2 Rep-cap genes. [9][10][11] The use of rHSV-1 vectors has historically been limited by their relatively low titers, and rAAV vector yields in rHSV-based manufacturing are thus affected by the titers of the rHSV-1 helpers. 12,13 Therefore, several methods to improve rHSV-1 yield have been studied, e.g., changing rHSV-1 propagation conditions [12][13][14] or using anti-inflammatory compounds known to inhibit the host defense mechanism, like dexamethasone or valproic acid. 15,16 In our search for compounds that would increase HSV-1 yield, we investigated another anti-inflammatory compound, aurintricarboxylic acid (ATA). ATA is a heterogeneous mixture of polymers credited with a continuously growing number of biological activities. [17][18][19][20] ATA is mostly known as an antiviral agent against several viruses, such as HIV, herpesvirus HHV-7, and SARS-CoV.
[20][21][22][23] However, ATA did not block the replication of adenovirus type 5 24 and has been reported to increase adenovirus type 5 titer in human embryonic kidney (HEK)-293 cells. 25 Here, we investigate the effects of ATA on HSV-1 vector production yield in V27, Vero, and HEK-293 cells. We further tested the rHSV-1 stocks produced in the presence of ATA in the HSV-1-based rAAV manufacturing protocol. RESULTS ATA effect on HSV-1 yield in V27 cells To test whether ATA could increase the yield of rHSV-1 d27-1 (d27-1) vector in V27 cells, ATA was applied to the media during the infection (ATA@step1) or dilution steps (ATA@step2) (Figure 1a). Interestingly, ATA treatment delayed HSV-1 plaque formation or cell lysis in V27 cell monolayers. Cytopathic effect at the time of harvest (72 hours postinfection) was between 20% and 60%, as compared with 100% cytopathic effect in the absence of ATA (data not shown). To determine which concentrations and conditions for ATA addition have an impact on HSV DNase resistant particles per milliliter (DRP/ml) and plaque-forming units per milliliter (PFU/ml) titers, ATA was added at varying concentrations (0-60 µmol/l ATA) either in six-well plates in the first step (ATA@step1) (Figure 1b) or in T150 flasks in both steps (ATA@step1 and ATA@step2) (Figure 1c). Both protocols showed an increase in HSV yield; the optimal conditions were either 50 µmol/l ATA during the infection step (50 µmol/l ATA@step1), which is subsequently diluted to a final concentration of 20 µmol/l ATA, or adding ATA at the dilution step (20 µmol/l ATA@step2) to achieve a final concentration of 20 µmol/l ATA (Figure 1b,c). In this example, the supernatant titers were 2.7 ± 0.1 × 10^8 DRP/ml and 8.1 ± 0.7 × 10^7 PFU/ml in the 50 µmol/l ATA@step1 protocol and 2.0 ± 0.7 × 10^8 DRP/ml and 7.4 ± 1.4 × 10^7 PFU/ml in the 20 µmol/l ATA@step2 protocol, as compared with 7.3 ± 0.6 × 10^7 DRP/ml and 2.9 ± 0.6 × 10^7 PFU/ml for the untreated control (0 µmol/l ATA) (Figure 1c). Results are expressed as means ± SD from two independent experiments (n = 2). Importance of serum presence in ATA-HSV protocol We tested the effect of serum-free media on DRP/ml and PFU/ml HSV-1 titers in the above-described ATA-HSV protocol, which used 10% fetal bovine serum (FBS) (Figure 2). In this example of the ATA@step2 protocol, serum-free media resulted in a d27-1 titer reduction from 3.2 ± 0.1 × 10^8 DRP/ml and 5.4 ± 0.3 × 10^7 PFU/ml (10% FBS) to 1.1 ± 0.2 × 10^7 DRP/ml (0% FBS), where the PFU/ml titer value was below the detection limit (Figure 2). An even more dramatic effect of serum-free media was seen in the ATA@step1 protocol, where both DRP/ml and PFU/ml titer values were below the detection limit (data not shown). Effect of residual ATA in HSV stocks on the production of rAAV virions Because our ultimate goal is rAAV production, we investigated whether the presence of ATA in rHSV-1 stocks would influence rAAV yields. The rAAV-GFP vector was produced by coinfection of rHSV-rep2/cap2 and rHSV-EGFP vectors in HEK-293 cells using rHSV-1 stocks not containing ATA, or rHSV-1 stocks prepared under the ATA@step1 protocol (see Materials and Methods). The presence of residual ATA in the rHSV-rep2/cap2 stock (~3 µmol/l) resulted in a small but statistically significant 1.3-fold increase in rAAV titer to 3.8 × 10^10 ± 2.4 × 10^9 DRP/ml (**P < 0.01) when compared with the rAAV DRP/ml titer made by using naive rHSV-1 stocks, 2.8 × 10^10 ± 2.8 × 10^9 DRP/ml (n = 4).
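The dilution arithmetic behind the two protocols is worth making explicit: the infection is performed in 2/5 of the final media volume (see Materials and Methods), so ATA added only at step 1 is diluted by the subsequent top-up, whereas the concentration quoted for ATA@step2 is already the final one. A minimal sketch (Python; the volume fraction is taken from the Methods, everything else is illustrative):

```python
def final_conc_after_topup(infection_conc_umol, infect_frac=0.4):
    """ATA added only during the infection step (ATA@step1) is diluted
    when the remaining medium (1 - infect_frac of the final volume)
    is added at the dilution step."""
    return infection_conc_umol * infect_frac

# 50 umol/l ATA@step1 with infection in 2/5 of the final volume gives
# 20 umol/l final, matching the optimum reported above.
print(final_conc_after_topup(50))  # 20.0
```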
As shown in Figure 4, ATA also slightly increased rAAV yield (DRP/cell) when 10 µmol/l ATA was spiked directly into HEK-293 cell media during the rHSV-1 coinfection step. This effect was not observed when the ATA concentrations were higher than 15 µmol/l. DISCUSSION In this study, we have shown for the first time that micromolar concentrations of ATA increase titers of certain HSV-1 strains in various cell lines. In the rHSV-1-based rAAV manufacturing protocol, the yield of rAAV is limited by the maximal titer of helper rHSV-1 vectors, and in addition, the presence of residual ATA in rHSV-1 stocks did not negatively influence rAAV yield. Thus, these findings are important both for large-scale rHSV-1 vector production and rAAV vector production. Previously, several groups investigated ways to increase HSV-1 titers by changing the propagation conditions or by using reagents capable of decreasing host defense. 13-16 HSV-1 is known to both induce and partially evade host antiviral responses. 26 Both dexamethasone and valproic acid have been shown to increase HSV-1 yield by either inhibition of cellular defense against viral propagation or induction of interferon (IFN)-responsive antiviral genes. 15,16 ATA was reported to reduce inducible nitric oxide synthase (iNOS) expression, to inhibit JAK-STAT signaling, or to prevent IFN-mediated transcriptional activation. 27,28 ATA has also previously been shown to increase adenovirus type 5 titer in HEK-293 cells. 25 HSV-1 infection activates the innate immune system by inducing intracellular signaling pathways that lead to the expression of proteins with proinflammatory and microbicidal activities, including cytokines and IFNs. [29][30][31] IFN signaling is one of the most important cellular defense mechanisms for viral clearance; 32,33 however, both Vero and HEK-293 cells have dysfunctional intracellular antiviral signaling pathways. [34][35][36][37][38] Vero cells are unable to produce IFN-β, while HEK-293 cells do not produce IFN-α, which can explain their permissiveness for viral production. [34][35][36][37][38] ATA is known as an activator of the Raf/MEK/MAPK pathway, the insulin-like growth factor-1 receptor, and protein kinase C signaling. 39,40 It has been reported that ATA has a survival-promoting effect transduced via activation of the insulin-like growth factor-1 receptor signaling pathway and Akt, MAP, and p42/p44 mitogen-activated protein kinases (Erk-1 and -2). 39,41 We have observed that ATA treatment delayed HSV-1 plaque formation, cell lysis in V27 cell monolayers, and cytopathic effect, which may suggest antiapoptotic properties. ATA has also been shown to prevent apoptotic cell death in a variety of cell types including breast cancer MDA-231 and MCF-7 cells, macrophage RAW 264.7 cells, and rat pheochromocytoma PC12 cells. 28,[41][42][43] Interestingly, ATA in millimolar and higher concentrations is known as an antiviral agent. [20][21][22][23] In Vero, V27, and HEK-293 cells, we find that HSV-1 titer actually increases when ATA was at micromolar concentrations; however, a possible antiviral effect was observed when ATA was added into serum-free media. Because ATA, being a polycarboxylate, would bind by electrostatic interactions to any protein that contains positively charged residues, given the myriad of possible interaction sites, it has been considered a nonspecific enzyme inhibitor. 20,44 The exact mechanism by which ATA increases HSV-1 yield therefore remains unknown at this point. Our finding that ATA increases HSV-1 yield has important implications both for large-scale rHSV-1 production and rAAV vector production.
Figure 2. The importance of fetal bovine serum (FBS) in the ATA-HSV protocol. The following experiments were conducted to determine optimal conditions and the effect of the presence or absence of 10% FBS on HSV titer in the ATA@step2 protocol. In this example, 20 µmol/l ATA and 10% FBS were either added or omitted in step 2 (ATA@step2). When ATA was added without 10% FBS, the DNase resistant particles per milliliter (DRP/ml) titer of the HSV-1 d27-1 vector was reduced, and the plaque-forming units per milliliter (PFU/ml) titer was below the detection limit. The HSV-1 d27-1 titers (d27-1) are expressed as mean values + SD of DRP/ml, shown as black bars, and as mean values + SD of PFU/ml, shown as white bars. Results are representative of two independent experiments (n = 2) and are expressed as mean + SD. ATA, aurintricarboxylic acid; HSV, herpes simplex virus.
HSV-1 production The ICP27-deficient vectors d27-1, rHSV-rep2/cap2, and rHSV-EGFP strains were propagated in the ICP27-complementing V27 cell line. The wtHSV-1 KOS strain and wtHSV-1 McIntyre strain were propagated in Vero or HEK-293 cell lines. ATA was applied to the media during the infection (ATA@step1) or dilution step (ATA@step2) (Figure 1a). The HSV-1 infection at a multiplicity of infection of 0.15 (typically 6 × 10^5 cells in a six-well plate) was performed in 40% (2/5 vol) of the total final media volume for 1-2 hours, and the remaining media (60% or 3/5 of the total final volume) was added during the dilution step. The cells were then incubated for 72 hours and the supernatant harvested to perform assays to obtain titers in DRP/ml and PFU/ml. Infectious vector particles were harvested 72 hours postinfection by collecting the culture supernatant. The titers of HSV-1 stocks in DRP/ml were determined by Taqman assay. Viral genomes within crude culture medium were quantified via treatment in the presence of DNase I (Promega, Madison, WI) (50 U/ml final concentration) at 37 °C for 60 minutes, followed by digestion with proteinase K (Invitrogen Life Technologies) (1 U/ml final concentration) at 50 °C for 60 minutes, and then denaturation at 95 °C for 30 minutes. Linearized plasmid pZero 195 UL36, obtained from Applied Genetic Technologies Corporation, was used to generate standard curves. The primer-probe set was specific for the vector genome UL36 sequence (HSV-UL36 F: 5′-GTTGGTTATGGGGGAGTGTGG, HSV-UL36 R: 5′-TCCTTGTCTGGGGTGTCTTCG, and HSV-UL36 Probe: 5′-6FAM-CGACGAAGACTCCGACGCCACCTC-TAMRA). Amplification of the polymerase chain reaction (PCR) product was achieved with the following cycling parameters: 1 cycle at 50 °C for 2 minutes and 1 cycle at 95 °C for 10 minutes, followed by 40 cycles of 95 °C for 15 seconds and 60 °C for 60 seconds. The results were expressed as a mean of rHSV DRP/ml titers ± SD and were statistically analyzed by Prism 5.0d GraphPad Software (GraphPad Software, La Jolla, CA). The titers of HSV-1 stocks in PFU/ml were also determined within crude culture medium. PFU/ml were quantified by serial dilution in DMEM with 10% FBS and 1% penicillin/streptomycin and application to either V27 or HEK-293 cells. At 6 hours posttreatment, infectious media content was adjusted to DMEM/10% FBS/1% PenStrep/0.2% γ-globulins by spiking an appropriate 4% (w/v) γ-globulin in phosphate-buffered saline solution into DMEM/10% FBS/1% PenStrep. At 72 hours postinfection, media was removed, and plates were dried and then fixed.
Cell monolayers were treated with horseradish peroxidase-conjugated α-HSV antibody (Dako, Carpinteria, CA) followed by the Vector VIP Peroxidase Substrate Kit (Vector Labs, Burlingame, CA) for enumeration and PFU titer calculation. The results were expressed as a mean of rHSV PFU/ml titers ± SD and were statistically analyzed by Prism 5.0d GraphPad Software. rAAV production HEK-293 cells (2.5 × 10^6 cells) were simultaneously coinfected with both rHSV-rep2/cap2 and rHSV-EGFP vectors as described by Kang et al. 11 At 2-4 hours postinfection, infectious medium was exchanged with DMEM + 10% FBS equivalent to double the preinfection culture volume. At the time of harvest, the cell pellet was frozen at −80 °C. DRP titers were quantified by real-time PCR in a 96-well block Applied Biosystems 7500 Fast Real-Time PCR System (Applied Biosystems Life Technologies, Grand Island, NY). Crude samples were subjected to three cycles of freezing and thawing, then incubated in the presence of 250 U/ml of Benzonase Endonuclease (EMD Millipore, Billerica, MA) in 2 mmol/l MgCl2 with protein-grade Tween 80 (1% final concentration) at 37 °C for 60 minutes, followed by 0.25% Gibco Trypsin (Invitrogen Life Technologies) digestion at 50 °C for 60 minutes. Finally, treatment with DNase I (50 U/ml final concentration) at 37 °C for 30 minutes was performed, followed by denaturation at 95 °C for 20 minutes. Linearized plasmid pDC67/+SV40 (Genzyme, Framingham, MA) was used to generate standard curves. The primer-probe set was specific for the simian virus 40 (SV40) poly (A) sequence: rAAV-F: 5′-AGCAATAGCATCACAAATTTCACAA-3′, rAAV-R: 5′-GCAGACATGATAAGATACATTGATGAGTT-3′, and rAAV-Probe: 5′-6-FAM-AGCATTTTTTTCACTGCATTCTAGTTGTGGTTTGTC-TAMRA-3′. Amplification of the PCR product was achieved with the following cycling parameters: 1 cycle at 50 °C for 2 minutes and 1 cycle at 95 °C for 10 minutes, followed by 40 cycles of 95 °C for 15 seconds and 60 °C for 60 seconds. The results were expressed as a mean of rAAV DRP/ml titers ± SD and were statistically analyzed by Prism 5.0d GraphPad Software. ACKNOWLEDGMENTS We thank Jennifer Tousignant for valuable comments on the manuscript. Figure 4. Effect of aurintricarboxylic acid (ATA) residues in herpes simplex virus-1 (HSV-1) stocks on the production of recombinant adeno-associated virus (rAAV) virions. The rAAV-GFP vector was produced by coinfection of rHSV-rep2/cap2 and rHSV-EGFP vectors in human embryonic kidney (HEK)-293 cells in 60-mm plates. ATA was shown to slightly increase rAAV yields (DNase resistant particles (DRP) per cell) when 10 µmol/l ATA was spiked directly into HEK-293 cell media during the 2-hour HSV coinfection step. This effect was not observed when the ATA concentrations in HEK-293 cells during the 2-hour HSV coinfection were higher than 15 µmol/l. Results are representative of two independent experiments (n = 2) and are expressed as mean + SD of rAAV DRP/cell. eGFP, enhanced GFP; GFP, green fluorescent protein.
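As an illustration of the two titer readouts used throughout this paper (plaque counting for PFU/ml and a qPCR standard curve for DRP/ml), a minimal sketch with entirely hypothetical plaque counts, dilution, and standard-curve parameters:

```python
def pfu_per_ml(plaque_count, dilution_factor, inoculum_ml):
    """Plaque assay titer: plaques / (dilution x volume plated)."""
    return plaque_count / (dilution_factor * inoculum_ml)

def drp_per_ml(ct, slope, intercept, dilution_factor=1.0):
    """qPCR titer from a linear standard curve of the form
    Ct = slope * log10(copies) + intercept."""
    copies = 10 ** ((ct - intercept) / slope)
    return copies * dilution_factor

# 42 plaques at a 1e-6 dilution with a 0.5 ml inoculum -> 8.4e7 PFU/ml.
print(f"{pfu_per_ml(42, 1e-6, 0.5):.2e} PFU/ml")
# Hypothetical standard curve: slope -3.3, intercept 38; observed Ct 12.6.
print(f"{drp_per_ml(12.6, -3.3, 38):.2e} DRP/ml")
```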
A Maximum Entropy Framework that Integrates Word Dependencies and Grammatical Relations for Reading Comprehension Automatic reading comprehension (RC) systems can analyze a given passage and generate/extract answers in response to questions about the passage. The RC passages are often constrained in their lengths and the target answer sentence usually occurs very few times. In order to generate/extract a specific precise answer, this paper proposes the integration of two types of "deep" linguistic features, namely word dependencies and grammatical relations, in a maximum entropy (ME) framework to handle the RC task. The proposed approach achieves 44.7% and 73.2% HumSent accuracy on the Remedia and ChungHwa corpora respectively. This result is competitive with other results reported thus far. Introduction Automatic reading comprehension (RC) systems can analyze a given passage and generate/extract answers in response to questions about the passage. The RC passages are often constrained in their lengths and the target answer sentence usually occurs only once (or very few times). This differentiates the RC task from other tasks such as open-domain question answering (QA) in the Text Retrieval Conference (Light et al., 2001). In order to generate/extract a specific precise answer to a given question from a short passage, "deep" linguistic analysis of sentences in a passage is needed. Previous efforts in RC often use the bag-of-words (BOW) approach as the baseline, which is further augmented with techniques such as shallow syntactic analysis, the use of named entities (NE) and pronoun references. For example, Hirschman et al. (1999) have augmented the BOW approach with stemming, NE recognition, NE filtering, semantic class identification and pronoun resolution to achieve 36% HumSent accuracy in the Remedia test set. Based on these technologies, Riloff and Thelen (2000) improved the HumSent accuracy to 40% by applying a set of heuristic rules that assign handcrafted weights to matching words and NE. Charniak et al. (2000) used additional strategies for different question types to achieve 41%. An example strategy for why questions is that if the first word of the matching sentence is "this," "that," "these" or "those," the system should select the previous sentence as an answer. Light et al. (2001) also introduced an approach to estimate the performance upper bound of the BOW approach. When we applied the same approach to the Remedia test set, we obtained an upper bound of 48.3% HumSent accuracy. State-of-the-art performance reached 42% with answer patterns derived from the web (Du et al., 2005). This paper investigates the possibility of enhancing RC performance by applying "deep" linguistic analysis for every sentence in the passage. We make use of two types of features, namely word dependencies and grammatical relations, integrated in a maximum entropy framework. Word dependencies refer to the headword dependencies in lexicalized syntactic parse trees, together with part-of-speech (POS) information. Grammatical relations (GR) refer to linkages such as subject, object, modifier, etc. The ME framework has shown its effectiveness in solving QA tasks (Ittycheriah et al., 1994). In comparison with previous approaches mentioned earlier, the current approach involves richer syntactic information that covers longer-distance relationships. Corpora We used the Remedia corpus (Hirschman et al., 1999) and ChungHwa corpus (Xu and Meng, 2005) in our experiments.
The Remedia corpus contains 55 training stories and 60 testing stories (about 20K words). Each story contains 20 sentences on average and is accompanied by five types of questions: who, what, when, where and why. The ChungHwa corpus contains 50 training stories and 50 test stories (about 18K words). Each story contains 9 sentences and is accompanied by four questions on average. Both the Remedia and ChungHwa corpora contain the annotation of NE, anaphor referents and answer sentences. The Maximum Entropy Framework Suppose a story S contains n sentences, C_0, . . . , C_n; the objective of an RC system can be described as

$$A = \arg\max_{0 \le i \le n} P(C_i \mid Q) \quad (1)$$

Let x be the question (Q) and y be the answer sentence C_i that answers x. Equation 1 can be computed by the ME method (Zhou et al., 2003):

$$P(y \mid x) = \frac{\exp\left(\sum_j \lambda_j f_j(x, y)\right)}{\sum_{y'} \exp\left(\sum_j \lambda_j f_j(x, y')\right)}$$

For a given question Q, the C_i with the highest probability is selected. If multiple sentences have the maximum probability, the one that occurs the earliest in the passage is returned. We used the selective gain computation (SGC) algorithm (Zhou et al., 2003) to select features and estimate parameters because of its fast performance. Features Used in the "Deep" Linguistic Analysis A feature in the ME approach typically has binary values: f_j(x, y) = 1 if feature j occurs; otherwise f_j(x, y) = 0. This section describes two types of "deep" linguistic features to be integrated in the ME framework in two subsections. POS Tags of Matching Words and Dependencies Consider the following question Q and sentence C, Q: Who wrote the "Pledge of Allegiance" C: The pledge was written by Frances Bellamy. The sets of words and POS tags are: Q: {write/VB, pledge/NN, allegiance/NNP} C: {write/VB, pledge/NN, by/IN, Frances/NNP, Bellamy/NNP}. Two matching words between Q and C (i.e., "write" and "pledge") activate two POS tag features: f_VB(x, y) = 1 and f_NN(x, y) = 1. We extracted dependencies from lexicalized syntactic parse trees, which can be obtained according to the head-rules in (Collins, 1999) (e.g., see Figure 1). In a lexicalized syntactic parse tree, a dependency can be defined as <hc → hp> or <hr → TOP>, where hc is the headword of the child node, hp is the headword of the parent node (hc ≠ hp), and hr is the headword of the root node. Sample dependencies in C (see Figure 1) are: <write→TOP> and <pledge→write>. The dependency features are represented by the combined POS tags of the modifiers and headwords of (identical) matching dependencies. (We extracted dependencies from parse trees generated by Collins' parser (Collins, 1999).) A matching dependency between Q and C, <pledge→write>, activates a dependency feature: f_NN-VB(x, y) = 1. In total, we obtained 169 and 180 word dependency features from the Remedia and ChungHwa training sets respectively. Figure 2. The dependency trees produced by MINIPAR for a question and a candidate answer sentence. Matching Grammatical Relationships (GR) We extracted grammatical relationships from the dependency trees produced by MINIPAR (Lin, 1998), which covers 79% of the dependency relationships in the SUSANNE corpus with 89% precision. In a MINIPAR dependency relationship (word1 CATE1:RELATION:CATE2 word2), CATE1 and CATE2 represent such grammatical categories as nouns, verbs, adjectives, etc.; RELATION represents grammatical relationships such as subject, object, modifier, etc. Figure 2 shows dependency trees of Q and C produced by MINIPAR. Sample grammatical relationships in C are pledge N:det:Det the, and write V:by-subj:Prep by. GR features are extracted from identical matching relationships between questions and candidate sentences.
Experimental Results
We selected the features used in Quarc (Riloff and Thelen, 2000) to establish the reference performance level. In our experiments, the 24 rules in Quarc were transferred to ME features. For example, the rule "If contains(Q, {start, begin}) and contains(S, {start, begin, since, year}) Then Score(S) += 20" becomes: f_j(x, y) = 1 (0 < j < 25) if Q is a when question that contains "start" or "begin" and C contains "start," "begin," "since" or "year"; f_j(x, y) = 0 otherwise. In addition to the Quarc features, we resolved five pronouns (he, him, his, she and her) in the stories based on the annotation in the corpora. The result of using Quarc features in the ME framework is 38.3% HumSent accuracy on the Remedia test set. This is lower than the result (40%) obtained by our re-implementation of Quarc that uses handcrafted scores. A possible explanation is that handcrafted scores are more reliable than ME, since humans can generalize the scores even for sparse data. Therefore, we refined our reference performance level by combining the ME models (MEM) and handcrafted models (HCM). Suppose the score of a question-answer pair is score(Q, C_i); the conditional probability that C_i answers Q in HCM is:

$P_{HCM}(C_i \mid Q) = P(C_i \text{ answers } Q \mid Q) = \frac{score(Q, C_i)}{\sum_{j \le n} score(Q, C_j)}$

We combined the probabilities from MEM and HCM by linear interpolation:

$P(C_i \mid Q) = \alpha \, P_{MEM}(C_i \mid Q) + (1 - \alpha) \, P_{HCM}(C_i \mid Q)$

To obtain the optimal α, we partitioned the training set into four bins. The ME models are trained on three of the bins; the optimal α is determined on the remaining bin. By trying different bin combinations and different α such that 0 < α < 1 with interval 0.1, we obtained average optimal α = 0.15 and 0.9 on the Remedia and ChungHwa training sets respectively. Our baseline used the combined ME and handcrafted models to achieve 40.3% and 70.6% HumSent accuracy on the Remedia and ChungHwa test sets respectively. We set up our experiments such that the linguistic features are applied incrementally: (i) first, we use only POS tags of matching words between questions and candidate answer sentences; (ii) then we add POS tags of the matching dependencies; (iii) we apply only GR features from MINIPAR; (iv) all features are used. These four feature sets are denoted "+wp," "+wp+dp," "+mini" and "+wp+dp+mini" respectively. The results are shown in Figure 3 for the Remedia and ChungHwa test sets. With a significance level of 0.05, the pairwise t-test (over every question) for the statistical significance of the improvements gives p-values of 0.009 and 0.025 for the Remedia and ChungHwa test sets respectively. The "deep" syntactic features significantly improve the performance over the baseline system on the Remedia and ChungHwa test sets.
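The score combination and the tuning of α can be sketched as follows; this is a minimal sketch assuming the linear interpolation written above, and `evaluate` is a hypothetical callback standing in for training the ME model on three bins and scoring the combined model on the held-out bin.

```python
def hcm_probability(scores, i):
    """Turn handcrafted sentence scores into P(C_i answers Q | Q)
    by normalizing over all candidate sentences."""
    total = sum(scores)
    return scores[i] / total if total > 0 else 0.0

def combined_probability(alpha, p_mem, p_hcm):
    """Interpolate the ME model (MEM) and the handcrafted model (HCM)."""
    return alpha * p_mem + (1.0 - alpha) * p_hcm

def tune_alpha(evaluate, n_folds=4, step=0.1):
    """Grid-search alpha in (0, 1) at 0.1 steps: for each rotation of the
    training bins, pick the alpha with the best held-out HumSent accuracy,
    then average the per-fold optima as described in the text."""
    alphas = [round(step * k, 1) for k in range(1, int(round(1 / step)))]
    optima = [max(alphas, key=lambda a: evaluate(a, fold))
              for fold in range(n_folds)]
    return sum(optima) / len(optima)
```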
Conclusions
This paper proposes the integration of two types of "deep" linguistic features, namely word dependencies and grammatical relations, in an ME framework to handle the RC task. Our system leverages linguistic information such as POS, word dependencies and grammatical relationships in order to extract the appropriate answer sentence for a given question from all available sentences in the passage. Our system achieves 44.7% and 73.2% HumSent accuracy on the Remedia and ChungHwa test sets respectively. This is a statistically significant improvement over the reference performance levels, 40.3% and 70.6%, on the same test sets.
Removal of Cadmium from Simulated Wastewaters Using a Fixed Bed Bioelectrochemical Reactor

In this research, the removal of cadmium (Cd) from simulated wastewater was investigated using a fixed bed bio-electrochemical reactor. The effects of the main controlling factors on the performance of the removal process, such as applied cell voltage, initial Cd concentration, pH of the catholyte, and the mesh number of the cathode, were investigated. The results showed that the applied cell voltage had the main impact on the removal efficiency of cadmium: increasing the applied voltage led to higher removal efficiency, but also to lower current efficiency and higher energy consumption. The initial Cd concentration had no significant effect on the removal efficiency, but increasing the initial concentration gave higher current efficiency and lower energy consumption. The results established that a pH value lower than three causes a sharp decrease in the removal efficiency, and a pH value higher than seven also reduces it. Using a mesh number higher than 30 gave a lower removal efficiency. The best operating conditions were found to be an applied potential of 1.8 V, an initial Cd concentration of 125 ppm, and a pH of 7. Under these conditions, using a stack of stainless steel screens with mesh number 30 as a packed bed cathode, complete removal of Cd (100%) was obtained at a current efficiency of 83.57% and an energy consumption of 0.57 kWh/kg Cd.

INTRODUCTION
Pollution of the environment by heavy metals such as cadmium, lead, cobalt, nickel, zinc, and copper is a serious environmental and health hazard since these metals are toxic and non-biodegradable. They have the affinity to bio-accumulate through the food chain even at low concentrations, leading to many diseases and disorders (Amarasinghe and Williams, 2007; Choi, et al., 2014). Among these heavy metals, cadmium (Cd) is highly toxic to humans and has an extremely long biological half-life (greater than 20 years). The harmful effects of cadmium involve acute and chronic metabolic disorders, such as emphysema, renal damage, hypertension, and testicular atrophy (Choi, et al., 2014). Various industrial processes, such as smelting and refining of nonferrous metals, battery manufacturing, electroplating, and the inorganic pigment industry, result in cadmium contamination of wastewater streams (Kurniawan, et al., 2006). Hence, the development of effective methods for removing cadmium from wastewaters is an essential task with regard to the protection of public health and the environment. Traditional methods for cadmium removal involve physical, chemical, biological and electrochemical methods (Soares and Soares, 2011; Malaviya and Singh, 2011). The physical and chemical methods, such as adsorption, chemical precipitation, reverse osmosis, ion exchange, and membrane filtration, can be unsuccessful or very expensive, especially if the concentration of cadmium is below 100 mg/L (Ahluwalia and Goyal, 2007). For example, ion exchange needs a large quantity of chemicals for the regeneration of resin, while membrane processes are susceptible to fouling, causing high operation costs (Kurniawan, et al., 2006).
Traditional biological processes are considered an alternative to physical-chemical processes for cadmium removal via different mechanisms such as bio-sorption, enzymatic reduction, bio-mineralization, and precipitation (Bai, et al., 2008; Pagnanelli, et al., 2010). However, these traditional biological processes suffer from greater sludge generation and greater organic carbon consumption (Pagnanelli, et al., 2010). Electrochemical processes can remove Cd efficiently with no organic carbon consumption and no sludge production, but they require intensive energy input, have high capital cost, and show relatively low efficiency at dilute concentrations (Khairy, et al., 2014). Hence, providing an environmentally friendly and cost-effective method for cadmium removal with less sludge generation and lower energy demand still remains a challenge. Microbial fuel cells (MFCs) and microbial electrolysis cells (MECs) are considered promising technologies for sustainable wastewater treatment with simultaneous value-added products and clean energy generation (Zhang, et al., 2015). MFCs have been used for the recovery of various metals including chromium (Li, et al., 2008), copper (Heijne, et al., 2010), iron (Lefebvre, et al., 2013), vanadium (Zhang, et al., 2009), and selenium (Catal, et al., 2009) using a two-chamber design where these metals are removed in the anaerobic cathode chamber through cathodic metal reduction. In contrast, organics in the anodic chamber are used as carbon sources and electron donors (Abourached, et al., 2014). By applying an external voltage, MECs have been used to recover heavy metals such as lead, cadmium, cobalt, and zinc (Jiang, et al., 2014). In these MECs, exoelectrogenic bacteria oxidize organic substances at the bio-anode of the cell while water is simultaneously reduced to hydrogen gas at the cathode, in combination with the reduction of heavy metal ions into metallic solids (Logan, et al., 2008). Thus, wastewater can be treated while energy is recovered in the form of hydrogen gas. In MECs, bacteria cultivate as a bio-layer on the surface of the anode and oxidize the organic substances present in the anode chamber, such as acetates, converting them to CO2 and H2O while generating electrons at the anode surface. Therefore, the success of an MEC is influenced by the enrichment of bacteria on the surface of the anode. The bacteria used in MECs may come from sewage sludge, wastewater, and soils (Lee, et al.), with about 60% in the Geobacteraceae family. However, different electricity-generating abilities can be obtained from different soils that contain different microbial communities. Hence, the efficiency of an MEC depends on the type of soil used in the anodic chamber (Schamphelaire, et al., 2010). During the last two decades, the application of electrochemical technology in wastewater treatment has increased due to the development of three-dimensional electrodes (Ismail, et al., 2013). The main benefits of this type of electrode are the high mass transfer rate and the high specific surface area.
Removal of heavy metals by three-dimensional electrodes has been achieved using different configurations, such as those based on carbon or metal particles. In the present work, we investigated the removal of cadmium using a new design of MEC composed of a fixed bed of parallel stainless steel screens as a cathode and porous graphite as an anode, using local soil material as a source of bacteria. The effects of operating parameters, such as applied cell voltage, initial Cd concentration, pH of the catholyte, and mesh size of the screen, on the performance of the MEC were investigated. To the best of the authors' knowledge, no such bio-electrochemical system has been used for the removal of cadmium. The choice of stainless steel screens as cathode material is based on the observations of previous works, which confirmed that this material gives higher performance in the production of hydrogen by MEC systems at lower cost (Zhang, et al., 2010).

Materials and methods
2.1 Characterization of soil and electrodes
Sampling of soil was performed at an area located near al-Ghwarizm College at the University of Baghdad, Al-Jadriya, Iraq. Samples of soil were taken 0.1 m deep from the surface. The samples were screened using a sieve of 2 mm in diameter, then stored at 4 ºC for two weeks before use. The samples were analyzed for their physiochemical properties via routine methods (Page, et al., 1982). Briefly, the maximum water holding capacity (MWHC) of the soil was calculated from the difference in weight between dry and soaked soil samples. The pH of the soil was measured at a soil-to-water ratio of 1:2.5, while the electrical conductivity of the soil was determined at a soil-to-water ratio of 1:5. The types of bacteria in the soil were identified using the VITEK 2 compact system (bioMérieux, France), following the procedure provided with this system. In this procedure, three to five well-isolated colonies were transferred to a glass tube containing 3 ml distilled water, and the turbidity was adjusted with a DensiCHEK Plus to a bacterial cell count per 1 ml equal to 0.5 OD. Samples were then placed in the VITEK 2 compact system machine to transfer the bacterial suspension to a cassette by negative pressure, and the cassettes were incubated to complete a biochemical reaction within 12 h. The software of the VITEK 2 compact system was used to interpret the results. The X-ray diffractometer (XRD) technique (Philips Analytical X-Ray B.V. with PC-APD diffraction software, Philips expert, Holland) was used to determine the soil structure. The XRD system was operated at 40 kV and 30 mA with CuKα radiation as the X-ray source, λ = 1.54056 Å. The scan step time was 0.5 s with a step size of 0.02° and a scan range of 10-99.99°. Stainless steel 316-AISI screens with mesh sizes of 30, 40, and 60 in⁻¹ were used as cathode materials. The porosity of the screens and their specific surface area were determined using Eqs. 1 and 2, respectively (Sioda, 1976):

$\varepsilon = 1 - \frac{m_s/a_s}{\rho_s \, l}$  (1)

$s = r(1 - \varepsilon)$  (2)

where ε is the porosity, m_s/a_s is the weight/area density (g/cm²), ρ_s is the density of stainless steel 316-AISI (8.027 g/cm³) (Green and Perry, 2008), l is the thickness of the screen (cm), l = 2d, r is the surface-to-volume ratio of the wire forming the screen (cm⁻¹), r = 4/d, and s is the specific surface area (cm⁻¹).
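For a quick numerical check, the two screen relations can be evaluated directly from the wire diameter and the weight/area density, as in the sketch below; the input values are illustrative, not measurements from the paper.

```python
RHO_SS316 = 8.027  # density of stainless steel 316-AISI, g/cm^3

def screen_properties(weight_per_area, wire_diameter):
    """Porosity and specific surface area of a woven screen (Eqs. 1 and 2).
    weight_per_area: m_s/a_s in g/cm^2; wire_diameter: d in cm.
    Screen thickness l = 2d; wire surface-to-volume ratio r = 4/d."""
    l = 2.0 * wire_diameter
    r = 4.0 / wire_diameter
    porosity = 1.0 - weight_per_area / (RHO_SS316 * l)   # Eq. 1
    specific_area = r * (1.0 - porosity)                 # Eq. 2, cm^-1
    return porosity, specific_area

# e.g. a screen with d = 0.028 cm and 0.10 g/cm^2 (illustrative values)
eps, s = screen_properties(0.10, 0.028)
print(round(eps, 3), round(s, 1))   # porosity ~0.78, specific area ~32 cm^-1
```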
The type of screen weave was identified with an Olympus BX51M with DP70 digital camera system, while the diameter of the wire (d) was determined using a digital calliper. The anode material was rectangular porous graphite (59 × 59 × 15 mm) with 20-26% porosity, supplied by Tokai Carbon Co., Ltd. The same X-ray diffractometer (XRD) mentioned previously was used to determine the structure of the porous graphite. Scanning electron microscopy (SEM) (Tescan Mira3 FESEM, France) was used to characterize the topography of the graphite surface. The SEM system was operated at AV = 15 kV, bias = 0, spot = 3.0 and HV = 2 kV, bias = 1400 V. The BET method (BET Tavana, Iran, based on the Micromeritics software MicroActive for TriStar II Plus 2.03) was used to determine the specific surface area of the graphite.

2.2 Bio-electrochemical system
The fixed bed bio-electrochemical system adopted in the present work comprises a fixed bed electrochemical cell, two reservoirs for the anolyte and catholyte solutions (each a 1 L conical Pyrex flask), two dosing recirculation pumps (IML, HC-100, Italy) with a flow rate range of 5-8 l/h, and two calibrated flowmeters with a flow rate range of 0-0.25 l/min. Fig. 1-a shows a schematic diagram of the system, while Fig. 1-b displays a picture of the system. This configuration is known as batch recycle mode, which permits the recirculation of the anolyte and catholyte in two separate loops through the reactor. Fig. 2 illustrates the design of the bio-electrochemical cell. It is essentially a rectangular Perspex electrolytic cell composed of two chambers. The first is the anodic chamber, with dimensions 140 mm length × 100 mm width × 25 mm thickness, while the second is the cathodic chamber, with external dimensions 140 mm length × 100 mm width × 20 mm thickness. A cationic membrane (IONIC-64LMR), supported on both sides by PTFE perforated plates 2 mm thick, was used to separate the anodic and cathodic chambers. The cathode chamber consists of two cavities: an internal cavity with dimensions 60 mm × 60 mm × 2 mm, in which a stainless steel plate (59 mm × 59 mm × 2 mm) was fixed to act as a current feeder, and an external cavity with dimensions 60 mm × 60 mm × 5 mm, in which a stack of seven stainless steel screens (each 59 mm × 59 mm) was held to act as a fixed bed cathode. Electrical current was provided to each electrode by screw connectors passing through the walls of the cell. The anodic chamber consists of an internal cavity with dimensions 60 mm × 60 mm × 18 mm, in which a porous graphite block (59 mm × 59 mm × 15 mm) was fixed to act as the anode. To increase the contact surface area of the anode, grooves were cut lengthwise on its surface. The anolyte used in the experiments consisted of (per liter): CH3COONa 1 g, NaH2PO4·H2O 2.45 g, Na2HPO4 4.58 g, KCl 0.13 g, NH4Cl 0.31 g, adjusted to pH = 7 (Luo, et al., 2014). The catholyte was CdCl2 solution at the required concentration, adjusted to the desired pH by the addition of HCl or NaOH. The cathode was provided with a stack of stainless steel screens of the required mesh size, while the anode was provided with 2 g of soil spread on the anode surface. The cell was assembled after inserting a portion of anolyte and closing the inlet and outlet sections of the anode; the cell was then brought to a −0.3 bar moisture potential and incubated at room temperature for three days to ensure biofilm cultivation on the surface of the anode.
After this, the bio-electrochemical cell was connected to the flow system. Before starting any run, the anolyte and catholyte were pumped through the cell for one hour without connection to the power supply, to activate the bacteria in the soil; then the required voltage was applied to the circuit using a DC power supply (UNI-T: UTP3315TF-L, China) by connecting the negative lead of the power source in series with a 10 Ω resistor to the cathode and the positive lead to the anode. The electrochemical system was run at a temperature of 25 ± 2 ºC. Samples were taken throughout each run: every 10 min for the first hour, then every 30 min for the second hour, and finally every hour until the end of electrolysis at 6 hours. The concentration of Cd(II) was measured by atomic absorption spectroscopy (Varian SpectrAA 200 spectrometer). After each experiment, the cathode was replaced with a new stack of screens and the cathodic chamber was provided with fresh catholyte. The medium in the anodic chamber was replaced with new anolyte, and the anode was provided with a new sample of soil. To study the effect of the applied cell voltage, values of 0.6, 0.9, 1.2, 1.5, and 1.8 V were applied. The effect of the initial concentration was examined at values of 25, 50, 75, 100, and 125 ppm at constant applied voltage and pH. The effect of pH was studied at values of 1, 3, 5, 7, and 9.

Analysis and calculations
Cadmium removal efficiency (RE%) was calculated using Equation 3 (Modin, et al., 2017):

$RE\% = \frac{C_i - C_f}{C_i} \times 100$  (3)

where Ci is the initial Cd concentration (ppm) and Cf is the final Cd concentration (ppm) after a period of electrolysis time (∆t). Current efficiency (CE%) of the cathodic reactions is defined as the fraction of the electrical current used for metal ion reduction relative to the total current provided during electrolysis:

$CE\% = \frac{z_{Cd} \, n_{Cd} \, F}{\int I \, dt} \times 100$  (4)

where n_Cd is the quantity of Cd deposited on the cathodic surface (mmol), z_Cd is the number of electrons required to reduce Cd²⁺ (2 mol(e⁻) mol⁻¹ Cd), I is the current (mA), t is time (s), and F is Faraday's constant (96485.3 C mol⁻¹(e⁻)). The specific energy consumption EC (kWh kg⁻¹ Cd) was evaluated based on the amount of cadmium deposited (Equation 5):

$EC = \frac{E \int I \, dt}{n_{Cd} \, MW}$  (5)

where E is the applied cell voltage (V) and MW is the molecular weight of Cd (112.414 g mol⁻¹).
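The three performance measures follow directly from the sampled current and the atomic-absorption readings. Below is a minimal sketch of Equations 3-5; the trapezoidal integration of the logged current is our assumption, since the text does not state how the integral was evaluated.

```python
F = 96485.3      # Faraday constant, C mol^-1 (e-)
MW_CD = 112.414  # molecular weight of Cd, g/mol
Z_CD = 2         # electrons per Cd2+ reduced

def removal_efficiency(c_initial_ppm, c_final_ppm):
    """Eq. 3: RE% over the electrolysis period."""
    return 100.0 * (c_initial_ppm - c_final_ppm) / c_initial_ppm

def charge_passed(times_s, currents_mA):
    """Integral of I dt in coulombs, via the trapezoidal rule on sampled current."""
    q = 0.0
    for k in range(1, len(times_s)):
        q += 0.5 * (currents_mA[k-1] + currents_mA[k]) * 1e-3 * (times_s[k] - times_s[k-1])
    return q

def current_efficiency(n_cd_mmol, charge_C):
    """Eq. 4: share of the supplied charge consumed by Cd2+ reduction, in %."""
    return 100.0 * (n_cd_mmol * 1e-3) * Z_CD * F / charge_C

def energy_consumption(cell_voltage_V, charge_C, n_cd_mmol):
    """Eq. 5: specific energy in kWh per kg of deposited Cd."""
    energy_kWh = cell_voltage_V * charge_C / 3.6e6     # J -> kWh
    mass_kg = n_cd_mmol * 1e-3 * MW_CD / 1000.0        # mmol -> kg
    return energy_kWh / mass_kg
```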
3. RESULTS AND DISCUSSION
Table 1. XRD results of the soil samples.

Cathode
Stainless steel 316-AISI screens with the following chemical composition (in wt%): Cr 16.7, Ni 12.2, Mo 2.1, Mn 1.32, Si 0.56, P 0.03, C 0.022, S 0.012, Cu 0.26, Fe balance, were used as the cathode. The properties of these screens are given in Table 2; their specific surface areas are larger than 38 cm⁻¹ and their porosities higher than 0.6. Fig. 4 shows images of the three types of screens based on weave type.

Anode
Fig. 5 shows the XRD results of the porous graphite anode. They agree with the standard graphite structure with reference code 96-901-2231 (Li, et al., 2007). A sharp diffraction peak at 2θ = 26.6255° for C(002) with a d-spacing of 3.34802 Å was observed. A picture of the anode is shown in Fig. 6-a, while an SEM image of the porous graphite anode is presented in Fig. 6-b at a magnification of 7500×. High porosity with large pores formed between interconnected structures was observed, which differs entirely from normal, nonporous solid graphite. The BET surface area results confirmed that the porous graphite has a specific surface area of 22.75 m²/g, higher than that observed for graphite felt (SGL Carbon, GFA6 EA) (2.73 m² g⁻¹) (Jiang, et al., 2019).

Effect of applied cell voltage
Fig. 7-a shows the decay of Cd concentration versus time at different applied cell voltages. The effect of the applied cell voltage was studied at an initial cadmium concentration of 25 ppm, an electrolysis time of 6 hr, and pH = 7. Fig. 7-b displays the corresponding current decay with time. It was observed that increasing the applied cell voltage from 1.2 V to 1.8 V decreased the final concentration of Cd, which became lower than 1 ppm at 1.8 V. A similar observation was made by Chen, et al., 2016. However, increasing the cell potential from 0.6 V to 0.9 V increased, rather than decreased, the final concentration of cadmium. A possible interpretation of this behaviour is that the system was under activation control at applied voltages lower than 0.9 V, so fluctuations in the Cd concentration may have occurred; in addition, the hydrogen effect is not significant there, allowing the current efficiency to approach 100% (Nancharaiah, et al., 2015). Increasing the applied cell voltage also increased the maximum current peak and the final current value (Fig. 7-b). As the applied cell voltage rose from 0.6 to 1.8 V, the maximum current peak increased significantly from 1.4 mA to 7 mA, while the final Cd concentration decreased from 3.88 ppm to 0.45 ppm. Experimental observation confirmed that cadmium was undetectable in the anodic chamber at the studied applied cell voltages, eliminating the possibility of adverse effects of cadmium on the anodic biofilms (Chen, et al., 2016). With increasing applied potential, the pH of the cathode effluent increased from an initial 7 to a range of 7.08 to 9.10. The increase in pH results from the reduction of water on the cathode surface, which evolves hydrogen as a side reaction and releases OH⁻ ions into the catholyte (Colantonio and Kim, 2015). Nevertheless, these pH increases could not promote Cd(OH)2 formation in the MEC, according to the solubility product of Cd(OH)2 (Ksp = 3×10⁻¹⁶) (Mortimer, 2008). Table 3 shows the removal efficiency, current efficiency based on cadmium reduction, and energy consumption at different applied cell voltages. It should be noted that the values of current efficiency and energy consumption were determined from Equations 4 and 5, which are governed entirely by the power-supply current. Increasing the cell voltage from 0.6 to 1.8 V increased the removal efficiency from 84.47% to 98.19%. The increase in removal efficiency is related to the fact that the absolute value of the cathodic potential increases with the applied cell voltage, so more cadmium deposition occurs (Nancharaiah, et al., 2015). It is clear that current efficiency decreases with increasing applied cell voltage, while energy consumption increases. Current efficiency values higher than 100% confirm that cadmium deposition draws on sources of current other than the power supply, such as the current produced by the anodic substrate (bacteria).
At lower cell voltage, the contribution of current from the anodic substrate is higher than that from the power supply, while at higher voltage the contribution of the bacteria decreases. The same behaviour was observed by Sleutels et al., 2011. The current efficiency results confirmed the activity of bacteria in the soil, which release more electrons to the anode via the oxidation of organic compounds in the anodic chamber. Energy consumption increased with cell voltage, ranging from 0.05 kWh/kg at 0.6 V to 1.94 kWh/kg at 1.8 V. This highlights the energy advantage of the MEC in the present work over the conventional electrolysis cell used for cadmium reduction (Sulaymon, et al., 2017). The energy consumption in the present work is also lower than that observed in previous works (Modin, et al., 2012; Chen, et al., 2016; Wang, et al., 2016). A cell voltage of 0.6 V gives better current efficiency and lower energy consumption; however, its removal efficiency was the lowest obtained. Improving the removal efficiency at this voltage would require a longer electrolysis time, which may reduce the current efficiency and increase power consumption. Therefore, a cell potential of 1.8 V was adopted for further investigation of the other parameters.

Effect of initial Cd concentration
Fig. 8-a illustrates the cadmium concentration profiles with time at different initial Cd concentrations for operation at an applied voltage of 1.8 V, an electrolysis time of 6 hr, and pH = 7. Increasing the initial cadmium concentration from 25 ppm to 125 ppm resulted in the same final Cd concentration, lower than 1 ppm. A similar observation was made by Chen, et al., 2016. Choi, et al., 2014 found cadmium removal efficiencies of 93.43 ± 0.17% and 93.30 ± 0.74% for 50 ppm and 100 ppm after 60 h at an applied voltage of 1.57 V in their work on cadmium recovery by coupling double microbial fuel cells. This is an indication that the system, within this range of concentrations, operates at an electrode potential higher (more negative) than the standard potential of cadmium reduction (−0.40 V vs. SHE) (Colantonio and Kim, 2016); hence most of the cadmium was removed. However, current efficiency would be lower at low cadmium concentration due to competing hydrogen evolution. Fig. 8-b shows the corresponding current decay with time, where the maximum current peak increased from 7 mA to 10 mA as the concentration increased from 25 ppm to 125 ppm. Such an increase of current with initial concentration might be related to the increase in catholyte conductivity. The increased catholyte conductivity effectively reduced the internal resistance of the BES system and thus improved system performance (Jiang, et al., 2014). Table 4 presents the removal efficiency, current efficiency, and energy consumption at different initial Cd concentrations. Increasing the initial concentration led to higher current efficiency and lower energy consumption. As shown in Table 4, a concentration of 125 ppm gives complete removal of cadmium (100%) at 83.57% current efficiency and lower energy consumption compared with 25 ppm; therefore, a concentration of 125 ppm was chosen for further study of the operating parameters.
The energy consumption in this case is also lower than in previous works.

Effect of pH
Fig. 9-a displays the Cd concentration decay with time at different initial pH values of the catholyte, and the corresponding current decay is shown in Fig. 9-b, for operation at an applied cell voltage of 1.8 V, an initial Cd concentration of 125 ppm, and an electrolysis time of 6 hr. The results confirmed that the behaviour of the system at pH = 1 differs completely from that at the other pH values, the effect of the side reaction being much more noticeable. At pH = 1, the current peak is at its maximum (19 mA); it decreased sharply to 6.89 mA at pH = 9. Table 5 shows the effect of pH on the removal efficiency, current efficiency, and energy consumption. The results show clearly how strongly the initial pH affects the removal efficiency: it was no more than 8.22% at pH = 1, increased rapidly to 80.64% at pH = 3 and 96.94% at pH = 5, reached 100% at pH = 7, and then decreased to 94.85% at pH = 9. Although pH = 9 gives higher current efficiency and lower energy consumption, the removal efficiency did not exceed 94.85% and the final cadmium concentration (6.44 ppm) is greater than the 1 ppm allowable limit; therefore, pH = 7 was selected for further study, since it gave complete removal of Cd with an energy consumption lower than in previous works.

Table 5. Cd removal efficiency (RE), current efficiency (CE), and energy consumption (EC) of the microbial electrolysis cell, changing with the initial pH.

Effect of mesh size
Fig. 10-a displays the Cd concentration decay with time for two cases, the first without screens (flat cathode) and the second with screens of different mesh numbers, for operation at an applied voltage of 1.8 V, an initial Cd concentration of 125 ppm, an electrolysis time of 6 hr, and pH = 7. The corresponding current decay is shown in Fig. 10-b. Table 6 gives the removal efficiency, current efficiency, and energy consumption for the meshes and for the flat plate alone. A major enhancement of removal efficiency was observed when using screens as a packed bed: the removal efficiency increased from 70% with the flat plate to 100% with screens of mesh no. 30. This reflects the higher surface area of the screens, which leads to more cadmium reduction. The results showed that screens with mesh numbers greater than 30 reduce the removal efficiency. The lower removal efficiency of screens with mesh numbers 40 and 60 may result from weaker turbulence promotion and hence lower mass transfer (Sulaymon, et al., 2017). The current efficiency is higher and the energy consumption lower with the flat plate, indicating its feasibility, but at a lower removal efficiency with a final cadmium concentration (37.5 ppm) above the allowable limit. Therefore, screens with a mesh number of 30 are recommended for cadmium removal.

Table 6. Cd removal efficiency (RE), current efficiency (CE), and energy consumption (EC) of the microbial electrolysis cell, changing with screen mesh number.

4. CONCLUSIONS
The results of the present work confirmed the possibility of complete cadmium removal from simulated wastewater using a fixed bed bio-electrochemical reactor. The performance of this type of electrochemical reactor depends on various operating factors, such as applied cell voltage, initial Cd concentration, and initial pH.
The applied cell voltage is the most significant factor in the removal of cadmium: applying a cell voltage of 1.8 V reduced the final concentration of cadmium to below 1 ppm at an initial cadmium concentration of 25 ppm. A pH close to 1 is not recommended, since most of the current then goes to hydrogen evolution, resulting in low Cd removal efficiency. The present cell design can remove Cd at initial concentrations ranging from 25 to 125 ppm. The fixed bed reactor gave better results than a flat plate because of the higher surface area of the bed. These results suggest that the fixed bed bio-electrochemical reactor is a promising reactor design for the effective removal of heavy metals from wastewater. The optimum operating conditions were found to be a cell voltage of 1.8 V, an initial Cd concentration of 125 ppm, pH = 7, and an electrolysis time of 6 hr, under which complete removal of cadmium (100%) was obtained at a current efficiency of 83.57% and an energy consumption of 0.57 kWh/kg Cd.
Leapfrogging health system interventions to accelerate the achievement of non-communicable disease targets in Sri Lanka

The National Multisectoral Action Plan for the Prevention and Control of Non-Communicable Diseases Sri Lanka 2016-2025 targets a 25% relative reduction in premature mortality from cardiovascular disease, cancer, diabetes or chronic respiratory diseases by 2025 (1). It has also adopted a set of voluntary targets to be achieved by 2025 in relation to the control of major non-communicable disease (NCD) risk factors. Sri Lanka adopted the Sustainable Development Goals (SDGs), committing to action to end poverty, protect the planet and ensure that all people enjoy peace and prosperity. The third SDG (SDG 3) pledges to ensure healthy lives and to promote the well-being of all at all ages, including a relative reduction of 25% in premature mortality from cardiovascular disease, cancer, diabetes or chronic respiratory diseases. The latest evidence on monitoring the progress of NCD actions in countries has estimated the risk of premature death from NCD among those aged 30-70 years at 17% (22% for males and 13% for females) in Sri Lanka in 2018. It projects a linear trend and predicts that the country will not be able to achieve the voluntarily set target of a 25% relative reduction in premature NCD mortality by 2025 (2) (Figure 1). Furthermore, the report clearly demonstrates that none of the main NCD risk factors in Sri Lanka is on a path to reach the voluntarily set targets by 2025 (Figure 2). Thus, Sri Lanka urgently needs to accelerate its efforts to intervene on NCD risk factors and move onto paths that reach the targets. The World Health Organization (WHO) has identified a package of 16 'Best Buy' interventions that are cost-effective, affordable, feasible and scalable in all settings to address the growing burden of NCDs (3). The 'Best Buys' were first designated in 2011 and were updated in 2017 based on the latest evidence of intervention impact and cost. They are all health system interventions, comprising core population interventions (tobacco, alcohol, nutrition and physical activity) and individual services (early detection and management of cardiovascular disease, diabetes, lung diseases and cancer). From a financing perspective, these interventions cost as little as one dollar per person per year. Beyond the 'Best Buys', the WHO advocates a set of 'Good Buys', which are also evidence-based, effective health system interventions, though their cost is higher than one dollar per person per year (3). The latest assessment of the status of implementation of the 'Best Buy' NCD interventions in Sri Lanka shows that only five of the 16 'Best Buys' are fully implemented, while another five have been partially implemented (4). The number of lives that could be saved in Sri Lanka by 2025 by implementing all of the WHO 'Best Buys' has been estimated at 17,500 (2), reflecting the potential to move the country appreciably towards the NCD mortality reduction targets. Thus, it seems rational that Sri Lanka explore options to be more assertive and innovative in the implementation of NCD interventions.
This editorial aims to introduce the concept of 'leapfrogging', an approach to accelerate gains in the health system through interventions implemented at the primary health care level. The idea of leapfrogging is not new. It is drawn from successes in other sectors, where it has been applied to economic growth, sustainable and green development, and even to military strategy. The application of the concept of leapfrogging to NCD interventions in primary health care draws on the experience of the WHO European Region. Countries in Eastern Europe and Central Asia adopted the concept during the last decade and succeeded in achieving the 25% relative reduction target for premature NCD mortality ahead of its stipulated timing of 2025 (5). Furthermore, the World Economic Forum has also identified leapfrogging as a solution to sustain health system interventions in emerging economies as they try to catch up with more advanced health systems to reach NCD targets (6). In the health sector, leapfrogging means adopting innovations in technology and organizational behaviour to accelerate the development of a health system. In simple terms, it means skipping inefficient, more expensive and even dead-end intermediary steps in the interventional processes and moving directly to more advanced approaches representing today's good practices in delivering NCD interventions, to make progress more quickly. While the application of 'leapfrogging' solutions needs a careful analysis of the health system of a country to identify the opportunities, two key common strategies have been documented (5). One key strategy to 'leapfrog' is to adopt the so-called frugal and disruptive technological innovations and apply them in health system interventions for NCDs. The learning comes from other sectors (e.g. telecommunications, energy) where results have been achieved much faster through rapid adoption of technological innovation at scale. For example, in many low-income countries, the mobile telephone has gained widespread adoption even in areas with no access to landlines. Drawing on this learning, it is advocated that health system interventions adopt emerging technological breakthroughs such as artificial intelligence, telemedicine and other fifth-generation wireless technologies to leapfrog in the field of health, for rapid implementation of interventions and effective NCD outcomes. Adopting new operating models at the primary health care level is the other key strategy to leapfrog in NCD interventions. There is plenty of evidence on good practices in innovative changes to operating models that have led to effective NCD outcomes (7). Adopting innovative ways to deliver NCD interventions in a more people-centred manner by engaging the public, working more closely with the private sector, moving towards multi-profile primary care teams with redistribution of health workers' tasks to deliver NCD interventions, and using financial incentives to encourage health workers are some of the common examples. It must be emphasized that the key to success of any large-scale transformation adopting the leapfrogging concept is a comprehensive, well-aligned health system, which adopts and integrates large-scale technological and organizational innovations that are proven to be effective. The aim of this editorial is to inspire the 'champions' among public health practitioners who can make bold decisions and think out of the box to 'leapfrog' towards better health system interventions that accelerate the achievement of NCD targets.
Bell's palsy following the Ad26.COV2.S COVID-19 vaccination

To our knowledge, this is the first case report describing an incidence of Bell's palsy after the Janssen Ad26.COV2.S vaccination, and we highlight this case to further discussion and reporting of adverse effects. Systems for reporting of adverse effects report a significantly higher rate of Bell's palsy after the mRNA vaccines than after the Ad26.COV2.S COVID-19 vaccination. Although rare thrombotic complications have been reported after the injection of the Ad26.COV2.S COVID-19 vaccine,6 relatively few reports of Bell's palsy have been described. This case highlights the importance of continuing to monitor for side effects and complications on an individual basis following this novel vaccine.

Learning point for clinicians
We describe a case of Bell's palsy after Janssen COVID-19 (Ad26.COV2.S) vaccination. To the best of our knowledge, this is the first case report describing an incidence of Bell's palsy after the injection of the Ad26.COV2.S vaccine, and we highlight this case to further discussion and review.

Case presentation
A 62-year-old Filipino female with a past medical history of type 2 diabetes mellitus, hypertension and hyperlipidemia presented to the emergency department with a 2-day history of right facial droop 20 days following Ad26.COV2.S vaccination. The patient denied any prior history of stroke, transient ischemic attack, Bell's palsy or other unexplained neurological symptoms. She also denied any recent viral infection or facial trauma. At the time of presentation, she denied tingling, ear pain, hearing loss, dysgeusia, drooling, vision problems or rashes. Her physical examination was notable for near-complete paralysis of the right lower face and significant paralysis of the right upper face with incomplete eye closure, consistent with House-Brackmann grade 4 Bell's palsy (Figure 1). Her motor, sensory, gait and cerebellar examinations were otherwise normal. Head computed tomography and brain magnetic resonance imaging were unremarkable, without infarct, demyelination or peripheral nerve enhancement. She was diagnosed with Bell's palsy related to Coronavirus Disease 2019 (COVID-19) vaccination.

Discussion
Although most cases are idiopathic, peripheral facial nerve palsy can be observed in the context of viral infection, trauma, pregnancy and other inflammatory, autoimmune and neoplastic conditions. Facial nerve palsy has also been reported as an adverse event following vaccination, most often following the influenza vaccine.1,2 We report a patient who developed facial nerve palsy 20 days after administration of the Janssen coronavirus (Ad26.COV2.S) vaccine. Although we cannot directly attribute our patient's presentation to the vaccine, her presentation was temporally related. We believe that this case can bring awareness to a potential adverse effect, and we highlight this case to further discussion and review. The COVID-19 pandemic has caused substantial morbidity and mortality around the world, and the development of vaccines has drawn global attention as a means to stop the spread of the virus. The Ad26.COV2.S vaccine was issued emergency use authorization by the U.S. Food and Drug Administration (FDA) as the third vaccine against COVID-19 on 27 February 2021, with a relatively benign side effect profile.3
Although the phase 3 clinical trial for Ad26.COV2.S reported three cases of Bell's palsy, this was not significantly different from placebo, and there is no evidence to support a causal relationship between the vaccine and facial nerve palsy.4 However, given the expedited production of the vaccine and the novelty of its production, side effects and adverse effects are still under investigation. Some recent studies have supported an association of Bell's palsy with the mRNA COVID-19 vaccines, even though the FDA's phase 3 trials did not find the frequency of Bell's palsy to be above that of the general population.5 Systems for reporting of adverse effects report a significantly higher rate of Bell's palsy after the mRNA vaccines than after the Ad26.COV2.S COVID-19 vaccination. Although rare thrombotic complications have been reported after the injection of the Ad26.COV2.S COVID-19 vaccine,6 relatively few reports of Bell's palsy have been described. We present the first case report describing an incidence of Bell's palsy after the Janssen Ad26.COV2.S vaccination. This case highlights the importance of continuing to monitor for side effects and complications on an individual basis following this novel vaccine.

Patient consent
Written informed consent was obtained from the patient for publication of this case report and accompanying images.

Statement of ethics
Written informed consent was obtained from the patient for publication of this report and accompanying images.
Protein Nitration in a Mouse Model of Familial Amyotrophic Lateral Sclerosis

Multiple mechanisms have been proposed to contribute to amyotrophic lateral sclerosis (ALS) pathogenesis, including oxidative stress. Early evidence of a role for oxidative damage was based on the finding, in patients and murine models, of high levels of markers such as free nitrotyrosine (NT). However, no comprehensive study on the protein targets of nitration in ALS has been reported. We found an increased level of NT immunoreactivity in spinal cord protein extracts of a transgenic mouse model of familial ALS (FALS) at a presymptomatic stage of the disease compared with age-matched controls. NT immunoreactivity is increased in the soluble fraction of spinal cord homogenates and is found as a punctate staining in motor neuron perikarya of presymptomatic FALS mice. Using a proteome-based strategy, we identified proteins nitrated in vivo, under physiological or pathological conditions, and compared their levels of specific nitration. α- and γ-enolase, ATP synthase β chain, heat shock cognate 71-kDa protein, and actin were overnitrated in presymptomatic FALS mice. We identified by matrix-assisted laser desorption/ionization mass spectrometry 16 sites of nitration in proteins oxidized in vivo. In particular, α-enolase nitration at Tyr43, a target also of phosphorylation, adds evidence on the possible interference of nitration with phosphorylation. In conclusion, we propose that protein nitration may have a role in ALS pathogenesis, acting directly by inhibiting the function of specific proteins and indirectly by interfering with protein degradation pathways and phosphorylation cascades.

A subset of FALS cases is linked to mutations in the copper-zinc superoxide dismutase (SOD1) gene. Familial and sporadic ALS cases are indistinguishable on the basis of clinical and pathological criteria, suggesting that the two forms share similar or converging pathogenetic mechanisms. Several mechanisms have been proposed to contribute to ALS pathogenesis, including excitotoxicity, mitochondrial dysfunction, impaired proteasomal function, protein aggregation, and apoptosis. However, it is not clear which is the primary event or what the temporal relations between these pathways are. Recent investigations support the notion that these mechanisms may be coordinated by oxidative stress, which can activate pathways that lead to additional oxidative stress and amplify the disease (1-3). Early evidence suggesting a role for oxidative damage in ALS came from the identification of markers of oxidative stress in the cortex and spinal cord of patients with sporadic and familial ALS (4-6). Among these markers, nitrotyrosine (NT) has attracted attention in view of Beckman's theory, which suggests a greater propensity of SOD1 mutants to use peroxynitrite as an enzyme substrate, leading to tyrosine nitration (7). In fact, increased levels of free NT have been found in human patients and mouse models of ALS (6, 8, 9). However, in only very few studies has protein-bound NT been specifically characterized in connection with ALS. Increased levels of free or protein-bound NT have been observed in several neurodegenerative and inflammatory diseases (10) and are usually considered a marker of peroxynitrite formation, causing irreversible protein damage (11). Immunohistochemistry studies have shown that nitrated proteins accumulate in Lewy bodies of a number of neurodegenerative synucleinopathies (12).
Using specific antibodies that recognize only nitrated α-synuclein, it was seen that the majority of Lewy bodies contain nitrated α-synuclein, indicating that this modification may participate in their formation (13). Recent findings have raised the question of whether protein nitration might also be a cellular signaling mechanism (14). In fact, it has been demonstrated that protein nitration is a reversible and selective process, like protein phosphorylation (15, 16). The dynamic nature of nitration was revealed by denitration and renitration of proteins in mitochondria subjected to hypoxia-anoxia and reoxygenation cycles. Characterization of the putative tyrosine denitrase activity, which has been described in preliminary reports (17), would give conclusive evidence of the reversibility of this biological process and its role in signal transduction. To further explore these concepts and to clarify the role of protein nitration in ALS pathogenesis, detailed and comprehensive studies of the target proteins are needed. However, from a technical point of view, it is challenging to analyze nitrated proteins in vivo. The modification is rare, and common biochemical procedures may lead to loss of NT through conversion to aminotyrosine (18). However, proteomic tools based on immunoblotting techniques have recently been adapted to the analysis of nitrated proteins and applied to investigate several pathological situations (19). We used a proteomic approach to analyze the nitrated proteins in spinal cord extracts of a murine model of a familial form of ALS (FALS): a transgenic (Tg) mouse overexpressing human SOD1 carrying the G93A mutation, which develops progressive motor dysfunction leading to paralysis and death (20). Moreover, by matrix-assisted laser desorption/ionization (MALDI) mass spectrometry, we identified 16 sites of nitration in proteins oxidized in vivo.

Transgenic Mouse Models
Tg mice originally obtained from Jackson Laboratories, expressing a high copy number of mutant human SOD1 with a G93A substitution, or WT human SOD1 mice, were bred and maintained in a C57BL/6 strain at the Consorzio Mario Negri Sud, S. Maria Imbaro (CH), Italy. Tg mice were identified by PCR (21). The mice were housed at a temperature of 21 ± 1 °C with a relative humidity of 55 ± 10% and 12 h of light. Food (standard pellets) and water were supplied ad libitum. Female Tg SOD1 G93A mice were killed at 9, 14, and 20 weeks of age, corresponding to presymptomatic, early symptomatic, and late stages, respectively, of the motor dysfunction (22). Age-matched female Tg SOD1 WT mice were used as controls. Non-Tg C57BL/6 mice were used as further controls for immunohistochemistry. Procedures involving animals and their care were conducted in conformity with institutional guidelines in compliance with national and international laws and policies.

Sample Preparation
Soluble Proteins-Spinal cords were suspended 1:4 (w/v) in a lysis buffer (10 mM Tris-HCl buffer, pH 7.4, containing 10 mM EDTA, 10 mM dithiothreitol, 50 mM iodoacetamide, 1 tablet of Complete™ Mini Protease Inhibitor Mixture (Roche Applied Science) per 10 ml of buffer, and 5 μM MG132 proteasome inhibitor (Sigma)) and sonicated. Homogenates were ultracentrifuged at 55,000 × g for 30 min at 4 °C, and soluble proteins were separated from the pellet.

Triton-insoluble Proteins or "Aggregates"-The pellet was further processed following a published protocol with some modifications (23) to enrich the fraction of ubiquitinated proteins.
Briefly, the pellet was resuspended in ice-cold 15 mM Tris-HCl buffer, pH 7.6, containing 0.25 M sucrose, 150 mM KCl, and 2% of the nonionic detergent Triton X-100, and shaken for 5 h at 4 °C. Samples were centrifuged at 10,000 × g at 4 °C for 15 min to obtain Triton-resistant pellets.

Dot Blot
This analysis was done using a dot-blot apparatus, Minifold II Slot-Blot System (Schleicher & Schuell). A polyvinylidene difluoride membrane (Millipore Corp.) was soaked in methanol for a few seconds, then in ultrapure water for 2 min, and conditioned in 20 mM Tris-HCl, pH 7.4, prior to assembly in the apparatus. Soluble proteins were directly loaded onto the membrane, whereas Triton-insoluble proteins were first resuspended in 0.5% SDS in 20 mM Tris-HCl, pH 7.4, and then diluted 1:10 with the same Tris-HCl buffer before application. Each sample was deposited on the membrane by vacuum filtration. Aliquots (2 μg) of samples from Tg SOD1 G93A or Tg SOD1 WT mice of different ages were loaded on the membrane. The membrane was probed with the anti-NT antibody as described under "Western Blotting." NT immunoreactivity was normalized to the actual amount of protein loaded on each spot of the membrane, as detected after Coomassie staining. Values were expressed as means ± S.E. Statistical analysis was performed by Student's t test.

Immunohistochemistry
Mice were anesthetized with Equithesin (1% phenobarbital, 4% (v/v) chloral hydrate, 30 μl/10 g intraperitoneal) and transcardially perfused with 20 ml of PBS followed by 50 ml of sodium phosphate-buffered 4% paraformaldehyde solution. Tissues were rapidly removed, postfixed in fixative for 2 h, transferred to 20% sucrose solution in phosphate-buffered saline (PBS) overnight and then to 30% sucrose solution until they sank, and finally frozen in 2-methylbutane at −45 °C. Transverse lumbar spinal cord sections at the L3-L5 level (30 μm thick) were cut on a cryostat and processed free-floating in multiwell plates. Cryosections were permeabilized with 70% methanol in PBS for 30 min at room temperature and blocked with 5% normal goat serum, 0.1% Triton X-100 in PBS for 30 min at room temperature. Sections were then incubated overnight, with constant agitation, with primary antibody (monoclonal anti-NT, clone HM.11, HyCult Biotechnology) diluted 1:100 in PBS containing 0.1% Triton X-100 and 5% normal goat serum. Immune reactions were revealed by a 60-min incubation in goat anti-mouse biotinylated IgG diluted 1:200 (Vector Laboratories), followed by a 60-min incubation in avidin-biotin-peroxidase complex (ABC; Vector), using 3,3′-diaminobenzidine as chromogen. The specificity of the NT immunostaining was evaluated in sections from control mice processed with omission of the primary antiserum or with primary antiserum preabsorbed with 3-NT and developed under the same conditions.

Two-dimensional Electrophoresis
Soluble proteins extracted from the spinal cord of G93A or WT mice were precipitated overnight with 4 volumes of methanol at −20 °C and resuspended in 8 M urea. Two-thirds of the sample from a spinal cord was used for preparative two-dimensional electrophoresis; one-third, after two-dimensional electrophoresis, was examined by Western blotting (WB). Two-dimensional electrophoresis was done essentially as previously described (24). Briefly, homemade 8-cm-long immobilized pH gradient gels, which cover, with an exponential course, the pH range 4-10 (25), were used.
Aliquots of sample solution were loaded on the immobilized pH gradient gels near the anode, on a stack of Paratex pads, and run in a Multiphor II apparatus (Amersham Biosciences). Pairs, with a WT and a G93A sample, were mounted tail-to-end on a 4-20% polyacrylamide gradient SDS gel cast in a Protean II apparatus (Bio-Rad) with the discontinuous buffer system of Laemmli. After completion of the run, the preparative two-dimensional electrophoresis gel was stained with Coomassie Blue, and the other two-dimensional electrophoresis gel was transferred onto a polyvinylidene difluoride membrane.

Western Blotting
Each WB experiment was done on a single polyvinylidene difluoride membrane containing WT and G93A samples run in parallel. The membranes were incubated for 1 h at room temperature with a blocking buffer (5% milk in Tris-buffered saline containing 0.1% Tween) and probed overnight at 4 °C with a monoclonal antibody against NT diluted 1:1000 (clone HM.11; HyCult Biotechnology) in blocking buffer. The membrane was then washed and incubated for 1 h at room temperature with goat anti-mouse peroxidase-conjugated secondary antibody diluted 1:5000 (Santa Cruz Biotechnology, Inc., Santa Cruz, CA). The immunopositive spots were visualized using a sensitive chemiluminescent protein detection system, ECL Plus (Amersham Biosciences). To reveal false immunopositive spots, membranes were first stripped with Restore™ Western blot stripping buffer (Pierce) and then chemically reduced to convert NT to aminotyrosine, with 10 mM sodium dithionite in 50 mM pyridine acetate buffer, pH 5.0, for 1 h at room temperature, as previously described (26). After the reaction, the membranes were extensively washed with distilled water and probed again with the anti-NT antibody. The immunopositive spots still detected after the reduction were not considered for further analysis.

Gel/Blot Image Analysis and Quantification
The two-dimensional electrophoresis gels and the two-dimensional WB images were captured by a high-resolution scanner, Expression 1680 Pro (Epson). Densitometry and image analysis were done with Progenesis software, Workstation version 2003.03 (Nonlinear Dynamics). Gel/blot comparison was done using the specific warping algorithm of the software in manual mode, placing seeding points on recognizable, intense immunopositive spots. The relative immunoreactivity for each protein spot was calculated as the ratio of the pixel volume of the immunoreactive spot on the blot to the pixel volume of the matched spot on the two-dimensional electrophoresis gel. NT-immunoreactive spot pixel volumes were normalized to the total immunoreactivity of the blot, and gel spot pixel volumes were normalized to the total spot volume of the gel. The fold increase or decrease in NT immunoreactivity was calculated as the ratio between the relative immunoreactivity in Tg SOD1 G93A and in Tg SOD1 WT mice. Fold changes of relative NT immunoreactivity in Tg SOD1 G93A compared with Tg SOD1 WT mice in the four experiments were expressed as the mean ± S.E. Statistical analysis was done by one-sample Student's t test, with p < 0.05 indicating fold changes significantly greater than 1.
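The spot quantification described above amounts to a pair of normalized ratios and a one-sample test. Below is a sketch assuming the pixel volumes are available as plain numbers (the inputs are hypothetical; SciPy provides the t test):

```python
from scipy import stats

def relative_immunoreactivity(blot_volume, gel_volume, blot_total, gel_total):
    """NT immunoreactivity of one spot: each pixel volume is normalized to its
    image total, then the blot signal is expressed relative to the amount of
    protein in the matched gel spot."""
    return (blot_volume / blot_total) / (gel_volume / gel_total)

def fold_change_test(g93a_values, wt_values):
    """Per-experiment fold changes (G93A / WT) of relative immunoreactivity
    and a one-sample t test against a mean of 1 (no change)."""
    folds = [g / w for g, w in zip(g93a_values, wt_values)]
    t, p = stats.ttest_1samp(folds, popmean=1.0)
    return folds, p

# Four replicate experiments for one spot (made-up values)
folds, p = fold_change_test([2.1, 1.8, 2.4, 1.9], [1.0, 1.1, 0.9, 1.0])
```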
Proteins were gel-digested overnight at 30°C with trypsin (Promega) at a concentration of 12 ng/μl in a 25 mM ammonium bicarbonate, 10% acetonitrile solution. When necessary, tryptic digests were concentrated and desalted using ZipTip® pipette tips with C18 resin and a 0.2-μl bed volume (Millipore Corp.). Peptide mass fingerprinting (PMF) was done on a ReflexIII™ MALDI mass spectrometer (Bruker Daltonics) using α-cyano-4-hydroxycinnamic acid as matrix, as previously described (27). The mass spectra were internally calibrated with trypsin autolysis fragments, routinely obtaining accuracy better than 50 ppm. Database (NCBInr) searches were done using the Mascot program (available on the World Wide Web at www.matrixscience.com), allowing up to one missed trypsin cleavage and a mass tolerance of ±0.1 Da. In the Mascot searching of Mus musculus sequences deposited in NCBInr, probability-based MOWSE scores (56) greater than 61 were considered significant (p < 0.05). In the case of a low score, the identification was confirmed by PMF analysis of an endoprotease V8 (Sigma) protein digest. The enzyme was used at a concentration of 12 ng/μl in a 25 mM ammonium bicarbonate, 5% acetonitrile solution. Spectra originating from parallel protein digestions were compared pairwise to discard common peaks from endoprotease V8 autodigestion. Database searches were done with the Mascot program, allowing up to two missed cleavages.

RESULTS

NT Immunoreactivity Is Higher in Spinal Cord of Tg SOD1 G93A Mice than Tg SOD1 WT Mice at All Stages of the Disease-We measured the total NT immunoreactivity, which comprises free and protein-bound NT, in spinal cord and hippocampus soluble protein extracts from Tg SOD1 G93A mice at presymptomatic (9 weeks), symptomatic (14 weeks), and late (20 weeks) stages of the disease and from age-matched control SOD1 WT mice. We found dot-blot analysis particularly suitable for this type of experiment, since it involves limited manipulation of the sample, and the NT modification is maximally preserved, whereas in classical SDS-polyacrylamide gel electrophoresis/WB analysis, boiling the sample with reducing agents may cause the conversion of NT to aminotyrosine (18). In addition, only a small amount of sample is required, and all of the replicates can be analyzed at the same time, allowing accurate comparison. At all ages examined, the level of NT immunoreactivity in G93A spinal cord soluble protein extracts was higher than for age-matched controls (Fig. 1A), with a peak increase of about 2-fold at the presymptomatic stage. We performed the same experiment with soluble protein extracts from hippocampus, which is not affected by the disease. In hippocampus (Fig. 1B), the levels of NT immunoreactivity were comparable for G93A and WT samples at 14 and 20 weeks. The NT level in G93A samples was higher, but not significantly, only at 9 weeks. On the other hand, the NT level in hippocampus extracts was about 2.5-fold less than in spinal cord extracts (data not shown).

Nitrated Proteins Do Not Selectively Accumulate in Protein Aggregates-We tested whether nitrated proteins participate in the formation of protein aggregates and accumulate in cell inclusions. The most widely seen inclusions in ALS immunostain for ubiquitin (both sporadic and FALS) and for SOD1 (SOD1-linked FALS cases) (28). Triton-insoluble fractions of protein extracts are rich in ubiquitinated proteins (29) and in SOD1 (30). The Triton-resistant pellet was our protein aggregate preparation.
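Referring back to the PMF searches described under "Protein Identification," the core matching step (theoretical tryptic fragments, up to one missed cleavage, ±0.1 Da tolerance) can be sketched as follows. The protein sequence and the peak list are hypothetical placeholders; a real Mascot search additionally computes probability-based MOWSE scores rather than simple hit lists.

```python
# Minimal sketch of peptide mass fingerprint matching with a +/-0.1 Da
# tolerance and up to one missed tryptic cleavage. Test data are
# hypothetical, not values from the paper.
MONO = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
        "V": 99.06841, "T": 101.04768, "C": 103.00919, "L": 113.08406,
        "I": 113.08406, "N": 114.04293, "D": 115.02694, "Q": 128.05858,
        "K": 128.09496, "E": 129.04259, "M": 131.04049, "H": 137.05891,
        "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931}
WATER, PROTON = 18.01056, 1.00728

def tryptic_peptides(seq, missed=1):
    """Cleave C-terminal to K/R (not before P); allow missed cleavages."""
    sites = [i + 1 for i, aa in enumerate(seq[:-1])
             if aa in "KR" and seq[i + 1] != "P"]
    bounds = [0] + sites + [len(seq)]
    for i in range(len(bounds) - 1):
        for j in range(i + 1, min(i + 2 + missed, len(bounds))):
            yield seq[bounds[i]:bounds[j]]

def mh(seq):
    """Monoisotopic [M+H]+ of a peptide."""
    return sum(MONO[aa] for aa in seq) + WATER + PROTON

def match(peaks, seq, tol=0.1):
    return [(p, pep) for pep in set(tryptic_peptides(seq))
            for p in peaks if abs(p - mh(pep)) <= tol]

peaks = [784.35, 1245.63]                  # hypothetical spectrum
print(match(peaks, "MDDIYKAAVEQLTEEQK"))   # hypothetical sequence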
Proteins were extracted from spinal cord of six Tg SOD1 G93A or Tg SOD1 WT mice at 9, 14, and 20 weeks of age, and NT immunoreactivity was measured in the Triton-insoluble fraction by dot-blot analysis. The levels of NT immunoreactivity were not significantly different between G93A and WT samples at all ages examined (data not shown). As shown in Fig. 2, NT immunoreactivity was substantially lower, about 10-fold, in the aggregates from G93A and WT samples at all ages compared with soluble extracts from 9-week-old Tg SOD1 WT mice.

NT Immunolabeling Appears as a Punctate Staining in the Motor Neuron Perikarya of Presymptomatic Tg SOD1 G93A Mice-The overall intensity of immunostaining was comparable in non-Tg, SOD1 WT, and presymptomatic SOD1 G93A mice (Fig. 3, A-C). However, in four of six presymptomatic SOD1 G93A mice, a peculiar punctate staining was noticed in the motor neurons, with spots intensely immunolabeled (Fig. 3C). This phenomenon was observed in highly vacuolated, degenerating motor neurons as well as in apparently healthy ones. At the symptomatic stage, there was an overall increase of NT immunoreactivity throughout the gray and white matter of the whole lumbar spinal cord sections, but a punctate staining was not observed (Fig. 3D). The increase in immunoreactivity was diffuse in the neuropil and was particularly concentrated in small cells and small tissue elements. Some of them appeared as markedly shrunken neurons, whereas others showed morphology typical of hypertrophic astrocytes and reactive microglia cells.

FIG. 1. NT immunoreactivity in soluble protein extracts of spinal cord and hippocampus of Tg SOD1 G93A mice throughout disease progression in comparison with healthy mice. NT immunoreactivity was measured by dot-blot analysis of protein extracts from spinal cord (A) and hippocampus (B) of Tg SOD1 G93A mice at 9, 14, and 20 weeks of age, corresponding to an early, symptomatic, and late stage of the disease. Age-matched Tg SOD1 WT mice were used as controls. Six mice per genotype and age were used for the analysis. NT immunoreactivity was normalized to the total protein loaded, as measured by densitometry of the dot-blot membrane after Coomassie staining. Values are the means ± S.E. of NT immunoreactivity of the samples calculated as a percentage of the 9-week-old WT sample mean. An asterisk indicates a G93A sample mean significantly higher (p < 0.05) than the age-matched WT sample mean, as assessed by Student's t test.

Identification of Nitrated Proteins in Spinal Cords of Presymptomatic Tg SOD1 G93A and Age-matched Tg SOD1 WT Mice-To investigate the impact of nitrated proteins in ALS pathogenesis, we concentrated our attention on mice at a presymptomatic stage of the disease in comparison with age-matched healthy control Tg mice. Nitrated proteins in spinal cords of Tg SOD1 G93A or Tg SOD1 WT mice were analyzed using a proteomic approach. Soluble proteins were extracted from spinal cord of 9-week-old mutated Tg and age-matched healthy control mice. Each sample was separated in duplicate by isoelectrofocusing. The second dimension for the G93A and WT samples was run in the same gel, in duplicate. One of the two-dimensional electrophoresis gels was then transferred to a single membrane for WB analysis using anti-NT antibody, and the other was stained with Coomassie Blue. The images of the blot and the corresponding Coomassie-stained two-dimensional electrophoresis gel were overlaid by the Progenesis software, using the warping/matching algorithm in the manual mode.
The two-dimensional electrophoresis gel spots corresponding to the immunoreactive spots were excised and subjected to PMF analysis for identification. Fig. 4, A and B, shows a representative gel and its corresponding anti-NT WB. Several immunoreactive spots are present in both WT and G93A samples. A few were considered false-positive because they were still present in the blot after the reducing treatment with sodium dithionite or because they could not be matched to visible Coomassie-stained gel spots. The remaining 32 were identified, as listed in Table I. The nitration patterns for WT and G93A mice are very similar, whereas the level of nitration for some proteins differs. For example, as can be seen in the magnification of a portion of the blot in Fig. 4, C and D, the levels of nitration of creatine kinase B chain and actin are 2.7- and 2.0-fold higher, respectively, in G93A than in WT mice (Fig. 4, E and F) (see also Table II). The relative immunoreactivity toward the anti-NT antibody, as a quantification of the level of specific protein nitration, could be measured (Table II), since WT and G93A samples were blotted to the same membrane for consistent experimental conditions. The increased nitration (>1.5-fold) in G93A compared with WT for each protein in this WB experiment is reported in Table II.

Overnitrated Proteins in Presymptomatic Tg SOD1 G93A Mice-In the two-dimensional WB/gel experiments, we found a mean 1.7 ± 0.4-fold increase (p = 0.07) of total NT immunoreactivity in G93A compared with WT samples. We used a controlled proteome-based experimental setting to compare WT and G93A samples, but we found several differences for the individual proteins among the various WB experiments. In combination with the intrinsic variability of the immunoblotting technique, it is likely that the extent and dynamics of protein nitration vary widely for the individual protein in different animals. However, we observed a number of proteins, listed in Table III and shown in Fig. 5, which were nitrated in all of the experiments and had a higher level of nitration in Tg SOD1 G93A compared with Tg SOD1 WT mice in at least three different WB experiments. We define these proteins as "overnitrated." Among them, we found a highly significant increase in NT level for heat shock cognate 71-kDa protein, ATP synthase β chain, and α-enolase. Finally, a special pattern was observed for α-synuclein, which was significantly "undernitrated" in mutated mice in all four experiments, by 2 ± 0.4-fold. This could depend on the high propensity of α-synuclein, exposed to nitrating agents, to form peculiar nitrated cross-linked oligomers, through covalent o,o′-dityrosine bonds (31), which may result in a decrease of the specific level of nitration in the spot of the monomeric form.

Identification of the Sites of Nitration by MALDI Mass Spectrometry-Some of the NT-immunopositive proteins, particularly the ones with higher relative immunoreactivity, were also confirmed as being nitrated by MALDI mass spectrometry (MS). During ionization/desorption in MALDI MS, nitrated fragments undergo a series of photodecomposition reactions involving the loss of one or two oxygens that can also be accompanied by further reductive reactions (32, 33). Although this could contribute to the dispersion of the signal of the nitrated fragments, in our hands MALDI MS appears rather sensitive for the analysis of in vivo nitrated proteins.
In fact, we detected Tyr-containing tryptic fragments together with possibly related modified fragments with mass shifts of +45, +31, +29, +15, and +13 Da, compatible with NO2-, NHOH-, NO-, NH2-, and N-Tyr modifications, respectively. Table IV reports the mass ions of the unmodified tryptic fragments, the potentially nitrated tryptic fragments, and their photodecomposition products. Fig. 6, A-D, gives examples of MS analysis of the nitrated fragments for actin, α-enolase, phosphoglycerate mutase, and creatine kinase B chain. Further MS/MS analysis of the possibly nitrated peptides was not possible because of low recovery of the modified peptides, as already observed by others when analyzing two-dimensional electrophoresis-separated proteins (34).

FIG. 2. Comparison of the level of NT immunoreactivity in soluble protein extracts and aggregates. NT immunoreactivity was measured by dot-blot analysis of soluble protein extracts from spinal cord of 9-week-old Tg SOD1 WT mice and of Triton-insoluble protein extracts from the spinal cord of Tg SOD1 WT and Tg SOD1 G93A mice at 9, 14, and 20 weeks of age. Dot-blot analysis was done on a pool of three G93A and WT mice, loading on the membrane a 2-μg aliquot of each pooled sample. NT immunoreactivity was normalized to the total protein loaded as measured by densitometry of the dot-blot membrane after Coomassie staining. Values are expressed in arbitrary units.

FIG. 3. NT immunoreactivity in ventral horn of transverse sections of lumbar spinal cord from non-Tg (A), SOD1 WT (B), and SOD1 G93A mice at presymptomatic (C) and symptomatic (D) stages. In non-Tg and SOD1 WT mice, low NT immunostaining is distributed in the gray matter except for the motor neurons, which show a higher immunolabeling (arrows; see also insets at higher magnification). In presymptomatic SOD1 G93A mice (C), some motor neurons show a remarkable, punctate staining of perikarya. This is evident in both massively vacuolated (lower inset in C) and apparently normal (upper inset in C) motor neurons. In symptomatic SOD1 G93A mice (D), a higher, diffuse NT immunoreactivity is observed in both gray and white matter. Small, intensely immunolabeled cells, apparently hypertrophic astrocytes (upper inset in D), are distributed in gray and white matter. Intensely labeled shrunken motor neurons (lower inset in D) are observed. Scale bars, 100 μm (low power) and 20 μm (insets).

DISCUSSION

Markers of oxidative damage and increased levels of free NT have been found in Tg mice and in patients with ALS (6, 8, 9), but no comprehensive study of the protein targets has yet been reported. Our investigation set out to clarify the role of oxidative stress, in particular protein nitration, in ALS pathogenesis by a specific proteomic approach. We used one of the most widely studied ALS animal models, the Tg SOD1 G93A mouse, which exhibits many of the hallmarks of the human disease and enabled us to analyze nitrated proteins at the different stages of disease progression (20). For comparison, we used age-matched healthy Tg SOD1 WT mice. Tg SOD1 WT and G93A mice express transgenic SOD1 at the same level, as measured by densitometric analysis (data not shown), allowing accurate proteome map comparisons.
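To make the mass-shift bookkeeping of the preceding paragraph concrete, the following minimal sketch generates, for a Tyr-containing fragment of known [M+H]+, the candidate masses of the NO2-, NHOH-, NO-, NH2- and N-Tyr forms (+45, +31, +29, +15 and +13 Da) and scans a peak list for them. The fragment mass and the peaks are hypothetical, not values from Table IV.

```python
# Minimal sketch of the nitration/photodecomposition mass ladder.
# All masses below are hypothetical placeholder data.
SHIFTS = {45.0: "NO2-Tyr", 31.0: "NHOH-Tyr", 29.0: "NO-Tyr",
          15.0: "NH2-Tyr", 13.0: "N-Tyr"}

def nitration_ladder(unmodified_mh):
    """Candidate [M+H]+ values for the modified forms of a fragment."""
    return {round(unmodified_mh + d, 2): name for d, name in SHIFTS.items()}

def find_ladder(peaks, unmodified_mh, tol=0.1):
    hits = {}
    for mass, name in nitration_ladder(unmodified_mh).items():
        for p in peaks:
            if abs(p - mass) <= tol:
                hits[name] = p
    return hits

peaks = [1132.56, 1161.55, 1177.54]          # hypothetical spectrum
print(find_ladder(peaks, unmodified_mh=1132.56))
```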
Technical problems exist in the detection of nitrated proteins because (i) the modification is rare in tissues and in biological fluids, even in pathological conditions (10); (ii) the modification is lost in strongly reducing conditions and possibly also under the action of not yet identified denitrase enzymes (17, 18); and (iii) common chemical and immunological methods may detect false-positive nitrated proteins (35, 36). Therefore, special care was needed in each experiment to avoid under- or overestimation, as described under "Materials and Methods." Using these methods, NT-bound proteins could be analyzed with high sensitivity in spinal cords of Tg SOD1 G93A/WT mice. The soluble protein fraction from mice at presymptomatic, symptomatic, and end stages showed considerably higher levels of NT immunoreactivity, indicating the presence of free and protein-bound NT, compared with controls. This increase was disease-specific, since it occurred basically in spinal cord of mutated Tg mice and not (except for a tendency at 9 weeks of age) in extracts from hippocampus, a tissue not affected by the disease. Furthermore, the increase appeared to be an early event. In fact, at 9 weeks, our line of Tg mice had no evident sign of neuromuscular deficits, although alterations of the mitochondria are observed at the morphological level (37). Nitrated proteins did not specifically accumulate in the aggregate fraction. In fact, only a small percentage of the proteins in the aggregates were NT-immunopositive. One explanation is that the modification renders the protein more polar and water-soluble, and only in selected cases may the protein become poorly soluble and form aggregates (e.g. α-synuclein, whose nitro group extensively affects the protein structure, causing cross-linking and oligomerization) (31). It cannot be excluded, however, that even a small amount of insoluble nitrated proteins may serve as a seed for aggregation of other proteins. Interesting and original observations in this study were the spots of intense NT immunoreactivity in the perikarya of motor neurons at the early stage of degeneration, in presymptomatic SOD1 G93A mice. We have not identified the nitrated protein(s) present in these spots, but this calls for further investigation. However, whatever the composition of these nitrated microaggregates, they do not seem to cause massive accumulation and aggregation of nitrated proteins in the motor neurons at a later phase of degeneration. In fact, although there is a steep rise in NT immunoreactivity in the spinal cord sections at the advanced stages of the disease, the signal appears homogeneously distributed in the reactive astrocytes and in shrunken motor neurons, except for some scattered, intensely labeled, round formations reminiscent of axonal spheroids (38). In addition, the pattern of distribution of NT immunoreactivity is different from that of ubiquitinated protein aggregates, suggesting that nitrated proteins do not accumulate in those aggregates (30). We therefore made a detailed proteomic analysis of the soluble fraction of spinal cord homogenates of 9-week-old SOD1 G93A and SOD1 WT mice. We identified 32 proteins nitrated in vivo under physiological or pathological conditions. These can be grouped in classes based on their recognized functions: chaperone, energy metabolism, GDP/GTP exchange regulator, cytoskeletal, antioxidant, and others (Table I).
With the proteomic approach, a relative quantification of the level of specific nitration for individual proteins was made in ALS mice in comparison with healthy mice. Behavior was not similar for all of the proteins in different experiments. This cannot be explained simply as individual variability or experimental artifact. Other authors have observed a similar pattern and suggested that protein nitration is a dynamic process, where a hypothetical denitrase removes the modification under certain conditions (15, 16). In any case, a small group of proteins, possibly the most susceptible to oxidation, were substantially overnitrated: α- and γ-enolase, ATP synthase β chain, heat shock cognate 71-kDa protein, and actin. Persistent alterations of the function of these proteins may have important implications for cell metabolism. Inhibition of function is the most widely reported consequence of protein Tyr nitration, but there are a few examples of a gain of function as well as of no effect on function (10, 39). On this basis, we could consider a potential role of nitration in ALS pathogenesis, possibly interfering with multiple pathways, as summarized in the scheme in Fig. 7. Enolase is a dimeric enzyme of the glycolytic pathway responsible for catalyzing the conversion of 2-phosphoglycerate to phosphoenolpyruvate. In neuronal tissues, the enzyme is present as a homodimer or heterodimer of α and γ subunits. The γ subunit is mainly located in neurons, and the α subunit is mainly located in glial cells. Increasing evidence links enolase-dependent pathways to several pathologies, especially autoimmune and neurodegenerative disorders (40). Abnormally oxidized forms of α-enolase have been suggested as a cause of the impaired glucose metabolism observed in brains of Alzheimer disease patients (41, 42). The ATP synthase (ATPase) β-chain subunit belongs to the F1 catalytic core of the mitochondrial ATPase complex, which has a key role in energy production. ATPase, by complex rotational movements of its subunits, couples the proton gradient generated by the respiratory chain to the synthesis of ATP from ADP and inorganic phosphate. The nitro group may strongly influence the interactions between the subunits and impair the complex machinery of ATP production. This is compatible with the reduced ATP synthesis in the spinal cord mitochondria of symptomatic SOD1 G93A mice (43). Heat shock cognate 71-kDa protein (HSC71) is a constitutively expressed member of the heat shock protein (HSP) 70 family. HSC71 is a molecular chaperone and facilitates the degradation by the proteasome of several proteins, including mutant SOD1, in a ubiquitination-dependent manner (44). Proteinaceous inclusions containing mutant SOD1 have been detected in ALS-transgenic mice and in patients with sporadic and familial ALS. Neurons may be relatively deficient in their ability to induce certain HSPs upon stress, and up-regulation of protein chaperones protects cells from mutant SOD1 toxicity (45). Very recently, it has been shown that treatment with arimoclomol, a coinducer of HSPs, raised the levels of HSP70 and -90 in the spinal motor neurons of SOD1 G93A mice and significantly delayed disease progression (46). Actin is a major cytoskeletal protein in neurons, involved in many aspects of cell motility, vesicle transport, and membrane turnover. Its organization in filaments is fundamental for its function. Oxidation, and in particular nitration, of actin can alter actin polymerization (47).
The motor neuron, with its extensive network of cell processes, needs a particularly efficient transport system to transfer cellular components synthesized in the cell body to their correct destination. In ALS mice, cytoskeletal abnormalities and slowed axonal transport have been observed before disease onset (48, 49). A definite explanation of the association of nitration of specific proteins with the expression of a pathological phenotype calls for detailed functional studies. This remains a challenge, because conditions that favor protein nitration can readily induce the concomitant oxidation of other amino acids that might be crucial in protein function. As recently reviewed, protein nitration may function as a cell signaling mechanism (14); however, its significance and, in particular, its relationship of competition or cooperation with protein Tyr phosphorylation still represent an open question (50). To address these problems properly, we must identify the sites of nitration and phosphorylation of proteins in vivo. However, only a limited number of proteins have been rigorously analyzed. We identified 16 sites of nitration in proteins in vivo by MALDI MS. Among the sites identified, it is interesting to note that Tyr-43 in α-enolase, reported to be susceptible to phosphorylation (51), was found to be nitrated. It would be worthwhile to investigate further how this influences α-enolase function. It is important to note that α-enolase, although abundantly expressed in most cells, is not a housekeeping gene (40). α-Enolase also has other, apparently unrelated functions, linked to its expression on the surface of a variety of eukaryotic cells (40). Mildly oxidized proteins are readily degraded by the proteasomal system, whereas severely oxidized proteins are poor substrates for proteases and may accumulate. Tyr nitration makes proteins more prone to proteasomal degradation (i.e. the rate of proteolytic cleavage mediated by the 20 S proteasome subunit is higher for nitrated proteins) (52-54). Protein nitration might indeed play a signaling role, targeting the proteins for degradation and determining an overload of the proteasome system. Moreover, we and another group found a reduction of 20 S proteasome levels in the motor neurons of Tg SOD1 G93A mice (30, 55) at the presymptomatic stage. This may contribute to increasing the levels of nitrated proteins, which do not specifically accumulate in the aggregates, except perhaps for the intensely NT-immunolabeled spots shown in the motor neurons. In conclusion, several factors may contribute to the increasing oxidative and nitrative stress in ALS: mutated SOD1, excitotoxicity, mitochondrial dysfunction, etc. All of these may, on one hand, exacerbate existing mitochondrial alterations and, on the other, lead to protein tyrosine nitration, which in turn may cause motor neuron degeneration by multiple pathways: directly, affecting protein functions important for cell metabolism and catabolism, and/or indirectly, through signaling mechanisms (Fig. 7). Protein nitration may therefore be an important physiological regulator, and ALS pathology may occur when the system is excessively perturbed.
2018-04-03T01:28:28.886Z
2005-04-22T00:00:00.000
{ "year": 2005, "sha1": "a179bba8064c5702415256342b5fed6a7ff65117", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/280/16/16295.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "3194aac6364a1a9b36536e35da9b089350314648", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
119642943
pes2o/s2orc
v3-fos-license
On a problem à la Kummer-Vandiver for function fields

We use Artin-Schreier base change to construct counterexamples to a Kummer-Vandiver type question for function fields.

Introduction

Let p be a prime number and let F be the maximal real subfield of Q(µ_p). The famous Kummer-Vandiver conjecture asserts that p does not divide the class number of F. It has been verified for all p less than 163,577,856 [3]. However, heuristic arguments of Washington suggest that the number of counterexamples p up to X should grow as log log X, making it difficult to find either counterexamples or convincing numerical evidence towards the conjecture. The second author has recently proven a function field analogue of the Herbrand-Ribet theorem, and formulated a version of the Kummer-Vandiver conjecture in this context [6]. In this note, which complements [6], we construct counterexamples to this Kummer-Vandiver statement. Let us now recall the statement of this analogue. Let A = F_q[T] and let C be the Carlitz module over A. This is the A-module scheme over Spec A whose underlying group scheme is the additive group G_{a,A}, and on which A acts via the F_q-algebra homomorphism φ : A → End G_{a,A} : T → T + τ, where τ : G_{a,A} → G_{a,A} is the q-th power Frobenius endomorphism. It is an example of a Drinfeld module, and in many ways it is an analogue of the multiplicative group in characteristic zero [4]. Let P ∈ A be monic irreducible, let k = F_q(T) be the fraction field of A and let K/k be the extension obtained by adjoining the P-torsion points of C. Then K/k is Galois and there is a canonical isomorphism ω_P : Gal(K/k) → (A/P)^×, which one can think of as the mod P Teichmüller character. Let R be the integral closure of A in K and put Y = Y_P = Spec R. Let C[P] be the P-torsion subscheme of C and let C[P]^D be its Cartier dual. Consider the flat cohomology group H_P = H^1(Y_fl, C[P]^D). This is an A/P-vector space on which the Galois group Gal(K/k) acts, so it decomposes in isotypical components as H_P = ⊕_n H_P(ω_P^n). In [6], it is shown that for n in the range 1 ≤ n < q^{deg P} − 1 which are divisible by q − 1 one has a criterion: the corresponding isotypical component of H_P is nonzero if and only if P divides B(n), where B(n) ∈ k is the n-th Bernoulli-Carlitz number. The Kummer-Vandiver problem can be stated as follows (see [6, Question 1]): do the isotypical components of H_P not covered by this criterion all vanish? The analogy with the classical Kummer-Vandiver conjecture is (implicitly) explained in [6, Remark 2]: using the Kummer sequence and flat duality it is shown that the classical Kummer-Vandiver conjecture is equivalent with an analogous vanishing statement in flat cohomology, where ℓ is an odd prime, ζ_ℓ a primitive ℓ-th root of unity and χ_ℓ denotes the mod ℓ cyclotomic character. In this paper we use Artin-Schreier change of variables and computer calculations to construct counterexamples to the above statement. For example, we use properties of a suitable prime P to show that the prime Q = P(T^p − T) provides a counterexample. Note that 9840 = n − 1 with n = (q^{deg Q} − 1)/2. The degree of the prime Q is too high to allow for a direct computation of H_Q. In a forthcoming paper we compare the flat cohomology groups of [6] with the group of "units modulo circular units" introduced by Anderson [1], and show amongst other things that the Kummer-Vandiver problem of [6] is equivalent with Anderson's Kummer-Vandiver conjecture [1, §4.12]. In particular, the present counterexamples will also serve as counterexamples to Anderson's conjecture.

Acknowledgements. The authors thank the referee for several suggestions and corrections that helped improve the paper.

Notation

2.1. L-functions. Let F/E be a finite abelian extension of function fields of curves over F_q. Assume that F_q is algebraically closed in both E and F.
Let χ : Gal(F/E) → C^× be a homomorphism, and let E_χ ⊂ F be the fixed field of ker χ. We set

L(X, E, χ) = ∏_v (1 − χ((v, E_χ/E)) X^{deg v})^{−1},

the product running over the places v of E unramified in E_χ. Here (−, E_χ/E) denotes the global reciprocity map. Recall that L(X, E, χ) is a rational function and that if χ ≠ 1 then L(X, E, χ) is a polynomial whose coefficients are algebraic integers.

2.2. The cyclotomic function fields. Let p be a prime number. Let F_q be a finite field having q elements, q = p^s, where p is the characteristic of F_q. Let A = F_q[T] be the polynomial ring in one variable T and let k = F_q(T) be its field of fractions. We denote the set of monic elements in A by A_+. For n ≥ 0, we denote the set of elements in A_+ of degree n by A_n. We fix k̄, an algebraic closure of k. All finite extensions of k considered in this note are assumed to be contained in k̄. We denote the unique place of k which is a pole of T by ∞. Let P ∈ A be monic irreducible of degree d. We denote the P-th cyclotomic function field by K_P (see [4], chapter 7). Recall that K_P/k is the maximal abelian extension of k such that: (1) K_P/k is unramified outside P and ∞; (2) K_P/k is tamely ramified at P and ∞. The Galois group Gal(K_P/k) is canonically isomorphic with (A/PA)^× and the subgroup F_q^× ⊂ (A/PA)^× is both the inertia and the decomposition group of ∞ in K_P/k.

Cyclicity of divisor class groups

3.1. An Artin-Schreier extension and the function γ. Let i : A_+ → Z/pZ be the additive function of [2, Lemma 2.1], which maps a monic polynomial to the trace to F_p of its next-to-leading coefficient. Let θ ∈ k̄ be a root of X^p − X = T. Then the extension k̃ obtained by adjoining θ to k is rational and we have k̃ = F_q(θ). The extension ramifies only at ∞. The integral closure of A in k̃, which we denote by Ã, is the polynomial ring F_q[θ] in θ. We have an isomorphism of groups Z/pZ → Gal(k̃/k) given by c ↦ (θ ↦ θ + c). Let (−, k̃/k) be the Artin symbol for ideals; then for all a ∈ A_+ we have, by [2, Lemma 2.1], (a, k̃/k)(θ) = θ + i(a). Let n be a positive integer. By [2, Lemma 3.2], for all m sufficiently large we have Σ_{a∈A_m} i(a) a^n = 0. We define γ(n) := Σ_{m≥0} Σ_{a∈A_m} i(a) a^n ∈ A.

3.2. Cyclotomic extensions of k and of k̃. Now, we fix a prime P ∈ A, monic irreducible of degree d, such that i(P) ≠ 0, and we set Q = P(θ^p − θ) ∈ Ã. Let K_P be the P-th cyclotomic function field for A, with Galois group ∆ = (A/PA)^×, and let K_Q be the Q-th cyclotomic function field for Ã, with Galois group ∆̃ = (Ã/QÃ)^×. By [2, Lemma 2.2], Q is a monic irreducible element of Ã of degree pd. Let L be the compositum of k̃ and K_P inside K_Q. Then L is an abelian extension of k with Galois group Z/pZ × ∆. On the other hand, we can identify the Galois group of L over k̃ with ∆, and obtain a surjective map ∆̃ → ∆, which is explicitly given by the norm from k̃ to k. Let ω_P : ∆ → W^× and ω_Q : ∆̃ → W^× be the Teichmüller characters. We will denote by ω̃_P the same character as ω_P, but seen as a character on Gal(L/k̃). In particular, we have

L(X, L/k̃, ω̃_P^n) = ∏_φ L(X, L/k, φω_P^n),    (1)

where φ runs over all characters of Gal(k̃/k) = Z/pZ. Observe that, if φ ≠ 1, then L(X, φω_P^n) is a polynomial of degree d (apply [2, Lemma 2.3] for both A and Ã). Furthermore, we have

v_p(L(1, L/k̃, ω̃_P^n)) − v_p(L(1, L/k, ω_P^n)) = (p − 1) v_p(L(1, L/k, ψω_P^n)),    (2)

where ψ : Z/pZ → W^× is the character that maps 1 to ζ_p.

3.4. Congruences. Assume that n is not divisible by q − 1. Then the Bernoulli-Goss polynomial β(n) is defined as follows:

β(n) := Σ_{m≥0} Σ_{a∈A_m} a^n ∈ A.

(The inner sum vanishes for all sufficiently large m.)

Proposition 1. Assume that n is not divisible by q − 1 and that p is odd. Then the following are equivalent: (1) L(1, L/k, ψω_P^n) ≡ 0 (mod (ζ_p − 1)²); (2) P divides both β(n) and γ(n).

Proof. Using the congruence ψ(c) ≡ 1 + c(ζ_p − 1) (mod (ζ_p − 1)²) for c ∈ Z/pZ, one finds

L(1, L/k, ψω_P^n) ≡ L(1, K_P/k, ω_P^n) + (ζ_p − 1) Σ_{m=0}^{d} Σ_{a∈A_m} i(a) ω_P(a)^n (mod (ζ_p − 1)²).

Since L(1, K_P/k, ω_P^n) ∈ W_0 and p is odd, it follows that L(1, L/k, ψω_P^n) vanishes modulo (ζ_p − 1)² if and only if both L(1, K_P/k, ω_P^n) ≡ 0 (mod p) and Σ_{m=0}^{d} Σ_{a∈A_m} i(a) ω_P(a)^n ≡ 0 (mod ζ_p − 1). The first congruence holds if and only if P divides β(n) and the second if and only if P divides γ(n).
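Because the counterexamples in this note rest on computer calculations of such sums, a small illustration may be useful. The following is a minimal sketch that evaluates β(n) modulo P by brute force, truncating the outer sum at m = deg P (monic polynomials of higher degree contribute 0 modulo p, since each residue class modulo P is then hit a multiple of p times). The choices p = q = 3, P(T) = T³ + 2T + 1 and n = 7 are illustrative only, not the data used in the paper.

```python
# Minimal sketch: beta(n) mod P over F_p[T] by brute force.
# Polynomials are coefficient tuples over F_p, lowest degree first.
from itertools import product

p = 3
P = (1, 2, 0, 1)          # P(T) = T^3 + 2T + 1, monic, irreducible over F_3

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return out

def polymod(a):
    """Reduce a modulo the monic polynomial P; pad to deg P coefficients."""
    a = list(a)
    while len(a) > len(P) - 1:
        c = a[-1]
        if c:
            s = len(a) - len(P)
            for i, y in enumerate(P):
                a[s + i] = (a[s + i] - c * y) % p
        a.pop()
    return tuple(a + [0] * (len(P) - 1 - len(a)))

def powmod(a, n):
    """a^n mod P by square-and-multiply."""
    r, a = polymod((1,)), polymod(a)
    while n:
        if n & 1:
            r = polymod(polymul(r, a))
        a = polymod(polymul(a, a))
        n >>= 1
    return r

def beta_mod_P(n):
    total = (0,) * (len(P) - 1)
    for m in range(len(P)):                # m = 0 .. deg P
        for tail in product(range(p), repeat=m):
            a = tuple(tail) + (1,)         # monic polynomial of degree m
            total = tuple((x + y) % p for x, y in zip(total, powmod(a, n)))
    return total

print(beta_mod_P(7))   # all-zero tuple would mean P | beta(7)
```

Running such a search over many primes P of a fixed degree, and counting how often β(n) and γ(n) vanish modulo P, is exactly the kind of experiment summarized in Table 1 below.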
3.5. Divisor class groups. Let E be a finite extension of k, with constant field F_{q^n}. We have an exact sequence

1 → F_{q^n}^× → E^× → Div⁰ E → Cl E → 0,

where Div⁰ E is the group of degree 0 divisors on E and Cl E the group of divisor classes of degree 0 of E. Since W is flat over Z this sequence remains exact after tensoring with W, and since F_{q^n}^× has no p-torsion we obtain a short exact sequence

0 → W ⊗ E^× → W ⊗ Div⁰ E → W ⊗ Cl E → 0.    (4)

Proposition 2. Let F/E be a finite Galois extension with Galois group G. Then there is a natural exact sequence

0 → W ⊗ Cl E → (W ⊗ Cl F)^G → W ⊗ ((Div⁰ F)^G / Div⁰ E).

Proof. By Hilbert 90 we have H^1(G, F^×) = 0, and since W is flat over Z, also H^1(G, W ⊗ F^×) = 0. Taking G-invariants in the sequence (4) for F gives a short exact sequence

0 → W ⊗ E^× → (W ⊗ Div⁰ F)^G → (W ⊗ Cl F)^G → 0.

Comparing this with (4) for E gives the desired exact sequence.

Corollary 1. Assume that n is not divisible by q − 1. Then C(L)(ω̃_P^{−n})^{Gal(L/K_P)} = C(K_P)(ω_P^{−n}), and the analogous identity holds for the extension K_Q/L. Here C(E) denotes W ⊗ Cl E.

Proof. Since L/K_P is unramified away from the primes above ∞, we have that (Div L)^G / Div K_P is generated by the primes above ∞. Let S be the set of primes of L above ∞ and W_S the free W-module with basis S. Because n is not divisible by q − 1 we have W_S(ω̃_P^{−n}) = 0; hence the first claim follows from the Proposition. For the second claim, use that K_Q/L is unramified away from Q and the primes above ∞, and that Q is totally ramified.

Theorem 1. Assume p ≠ 2. Let P ∈ A be monic irreducible of degree d, and such that i(P) ≠ 0. Let n be an integer such that 1 ≤ n ≤ q^d − 2, not divisible by q − 1 and such that β(n) and γ(n) are divisible by P. Then C(L)(ω̃_P^{−n}) is not a cyclic W-module.

Proof. Set U = C(L)(ω̃_P^{−n}) and assume that U is W-cyclic; recall that β(n) and γ(n) are divisible by P. Since β(n) is divisible by P, it follows that U^{Gal(L/K_P)} = C(K_P)(ω_P^{−n}) is nonzero, and in particular that U is nonzero. Let x ∈ U be a generator, so that U = Wx. Let g be a generator of Gal(L/K_P). We have gx = wx for some w ∈ W^×. This implies that w^p x = x, and it follows that w^p − 1 ≡ 0 (mod p) and w ≡ 1 (mod p). Since v_p(1 + w + ⋯ + w^{p−1}) = 1 we find pU = (1 + w + ⋯ + w^{p−1})U ⊂ U^{Gal(L/K_P)} and therefore the length of U/U^{Gal(L/K_P)} is at most 1. On the other hand, by [5] and Corollary 1, we have that the length of U/U^{Gal(L/K_P)} equals v_p(L(1, L/k̃, ω̃_P^n)) − v_p(L(1, K_P/k, ω_P^n)), and by (2) this equals (p − 1) v_p(L(1, L/k, ψω_P^n)). From Proposition 1 we deduce that the length of U/U^{Gal(L/K_P)} is at least 2, a contradiction.

Kummer-Vandiver

If P ∈ A is monic irreducible we write Y_P for the spectrum of the integral closure of A in K_P.

Theorem 2. Assume p ≠ 2. Let P ∈ A be monic irreducible of degree d and such that i(P) ≠ 0. Let n be an integer such that (1) β(n) is divisible by P if n is not divisible by q − 1; (2) γ(n) is divisible by P. Let Q = P(T^p − T) and N = n(q^{pd} − 1)/(q^d − 1). Then Q is irreducible in Ã and H_Q(ω_Q^{−N}) ≠ 0, with H_Q defined as in the introduction.

Proof. We split the proof in cases depending on the divisibility of n and dn by q − 1. Note that N is divisible by q − 1 if and only if nd is divisible by q − 1.

Case 1. Assume that n is divisible by q − 1. This case is treated in [2]. By [2, Proposition 2.6] we obtain the required divisibility. Without loss of generality we may assume that 1 ≤ n < q^d − 1. The claim then follows from the work of Okada ([7], see also [4, §8.20]) together with Theorem 1 of [6] (the "Herbrand-Ribet theorem").

Case 2. Now assume that n is not divisible by q − 1 but dn is. Then by Theorem 1 the module C(K_Q)(ω_Q^{−N}) is not cyclic, and so in particular it is nonzero. We conclude with the same argument as in Case 1.

Case 3. Assume that nd is not divisible by q − 1. As in the previous case, we find that C(K_Q)(ω_Q^{−N}) is nonzero. Now we conclude with a different argument.
By the above nonvanishing, and by the exact sequence (2) of [6], we find that the space of Cartier-invariant ω^{−N}-typical differential forms is at least two-dimensional. With the exact sequence of Theorem 2 of loc. cit. one concludes that H_Q(ω_Q^{−N}) ≠ 0. Also, note that n is not divisible by q − 1. Using Theorem 2 we thus find that the prime Q = P(T^p − T) gives a counterexample, with N = n(q^{pd} − 1)/(q^d − 1). This is the counterexample to the analogue of the Kummer-Vandiver conjecture stated at the end of the introduction (we have −9842 ≡ 9840 (mod q^{pd} − 1)).

Characteristic p = 2

We now assume that p = 2. With some minor changes, the above arguments still work, but the result is weaker. We keep the notations of section 3. If P(T) is a prime in A of degree d such that i(P) ≠ 0, then Q(θ) = P(T) is a prime of degree 2d in Ã = F_q[θ], where θ² − θ = T. Set again L = k̃K_P ⊂ K_Q. We have the following version of Theorem 1.

Theorem 3. Assume p = 2. Let P ∈ A be monic irreducible of degree d, and such that i(P) ≠ 0. Let n be an integer such that 1 ≤ n ≤ q^d − 2, not divisible by q − 1, and such that γ(n) is divisible by P and L(1, K_P/k, ω_P^n) is divisible by 4. Then C(L)(ω̃_P^{−n}) is not a cyclic W-module.

Note that β(n) is divisible by P if and only if L(1, K_P/k, ω_P^n) is divisible by 2, so the hypotheses are stronger than those in Theorem 1.

Proof of Theorem 3. The proof is almost identical to that of Theorem 1. Let ψ be the unique non-trivial character of G = Gal(k̃/k). Proposition 1 no longer holds, since we no longer have that (ζ_p − 1)² divides p. However, if γ(n) is divisible by P and if L(1, K_P/k, ω_P^n) is divisible by 4 (instead of 2), we can still conclude v_p(L(1, L/k, ψω_P^n)) ≥ 2. Denote the length of U = C(L)(ω̃_P^{−n}) by N. We have N = v_p(L(1, L/k, ψω_P^n)) + v_p(L(1, K_P/k, ω_P^n)) ≥ 4. Let g be the nontrivial element of Gal(L/K_P). Then the length of U^g equals v_p(L(1, K_P/k, ω_P^n)), which by hypothesis is at least 2. Suppose that U is a cyclic W-module, and let x ∈ U be a generator, so that U = Wx. There is a w ∈ W^× so that gx = wx. We then have that w² − 1 is divisible by 2^N. We find that w − 1 is divisible by 2^{N−1} but not by 2^N (since U^g ≠ U). It follows that (1 + g)U = 2U, and as in the proof of Theorem 1 we conclude that U/U^g has length at most 1, contradicting our hypothesis. We conclude that U cannot be W-cyclic. Using this, we get the following variation of Theorem 2 in characteristic 2, with the same proof.

5. Heuristics

This section contains no mathematical theorems, but only crude heuristic arguments and numerical observations. Our main goal is to convince the reader that one could a priori expect to construct many counterexamples using the above base change strategy. The arguments are specific to odd q, so we assume throughout the section that q is odd. We argue that one could expect that Theorem 2 yields at least cX^{1/p}(log X)^{−1} counterexamples of residue cardinality at most X to Question 1 (for some constant c > 0), which is much more than the log log X counterexamples predicted by Washington's heuristics. The reason to restrict to n of the form m(q^d − 1)/(q − 1) lies in the following trivial observation: if n is a multiple of (q^d − 1)/(q − 1), then β(n) and γ(n) modulo P lie inside F_q ⊂ A/P. So we may expect that β(n) and γ(n) are much more likely to vanish modulo a prime P of degree d if n is of the form m(q^d − 1)/(q − 1). Assume that q − 1 does not divide md.
We make the following hypotheses on a "random" monic irreducible P of degree d: (1) i(P) is non-zero with probability (p − 1)/p; (2) β(m(q^d − 1)/(q − 1)) is zero modulo P with probability 1/q; (3) γ(m(q^d − 1)/(q − 1)) is zero modulo P with probability 1/q; (4) the above probabilities are independent of each other, and independent of the vanishing of i(P). The first hypothesis is essentially an instance of the Chebotarev density theorem, the second and the third are motivated by Lemma 1, and the fourth is nothing more than wishful thinking. To some extent one can verify these statements experimentally. In Table 1 we reproduce some numerical data regarding these hypotheses. Note that in the example of Table 1, β seems to have a slight bias towards vanishing; we have no explanation for this bias. Finally, we show that under the above hypotheses, for some c > 0 and for all X sufficiently large, there are at least cX^{1/p}(log X)^{−1} primes of residue cardinality at most X that contradict Question 1. Indeed, for all X sufficiently large there is a positive integer d with (log_q X^{1/p}) − 2 < d ≤ log_q X^{1/p} and d not divisible by q − 1. Taking m = 1, we should find at least c X^{1/p}/log X monic irreducibles P of degree d satisfying the conditions (with c > 0 independent of X). Each of these leads to a counterexample Q of residue cardinality at most X.

   d    β(n) ≡ 0 (P)    γ(n) ≡ 0 (P)    β(n) ≡ γ(n) ≡ 0 (P)
   9        428             318                 142
  11        395             344                 137
  13        416             332                 147

Table 1. The number of P satisfying various congruences, out of random samples of 1000 primes P in F_3[t] with i(P) ≠ 0, of degrees 9, 11 and 13. Again n = (q^d − 1)/2. Note that every P counted in the rightmost column gives rise to a prime Q which gives a counterexample to Question 1.
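As a quick sanity check of hypotheses (2)-(4) against Table 1, one can compare the observed counts with the expectations under independence. The snippet below is a minimal sketch assuming q = 3 and samples already conditioned on i(P) ≠ 0; predicting the joint count from the observed marginals (b·g/1000) suggests that the independence hypothesis (4) fits the data reasonably well even though β is biased towards vanishing.

```python
# Observed counts from Table 1 (q = 3, 1000 primes per degree,
# all with i(P) != 0). Under hypotheses (2)-(4) each congruence
# holds with probability 1/q, and jointly with probability 1/q^2.
q, samples = 3, 1000
observed = {9: (428, 318, 142), 11: (395, 344, 137), 13: (416, 332, 147)}
print("expected single count:", samples / q)     # ~333.3
print("expected joint count :", samples / q**2)  # ~111.1
for d, (b, g, both) in observed.items():
    # Joint count predicted from the observed marginals, assuming
    # the two congruences are independent (hypothesis (4)).
    print(d, round(b * g / samples, 1), "predicted vs", both, "observed")
```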
2019-04-11T19:48:06.796Z
2011-10-03T00:00:00.000
{ "year": 2011, "sha1": "d10114b495048b8d9505b2ee00bdd33505373cf0", "oa_license": "elsevier-specific: oa user license", "oa_url": "https://doi.org/10.1016/j.jnt.2012.02.008", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "bd21ea4b671d5c472b7910cd9987fda07aa0e454", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
21931969
pes2o/s2orc
v3-fos-license
A new coronavirus-like particle associated with diarrhea in swine Summary Coronavirus-like particles were detected by electron microscopy in the intestinal contents of pigs during a diarrheal outbreak on 4 swine breeding farms. Diarrhea was reproduced in experimental pigs with one of the isolates, designated CV777, which was found to be distinct from the 2 known porcine coronaviruses, transmissible gastroenteritis virus and hemagglutinating encephalomyelitis virus. Summary Coronavirus-like particles were detected by electron microscopy in the intestinal contents of pigs during a diarrheal outbreak on 4 swine breeding farms. Diarrhea was reproduced in experimental pigs with one of the isolates, designated CV777, which was found to be distinct from the 2 known porcine coronaviruses, transmissible gastroenteritis virus and hemagglutinating encephalomyelitis virus. In 1946, Do Yr.E and HUTC~I~GS (2) described a viral diarrhea, in swine and called it transmissible gastroenteritis. Until recently, transmissible gastroenteritis virus was the only virus known to be specifically associated with diarrhea in swine of all ages. In 1976, following the discovery of rotaviruses in different, animal species, a porcine rotavirus was detected in the feces of pigs with diarrhea (14). Diarrhea could be reproduced experimentally in piglets with this virus. In a search for rotaviruses on Belgian swine breeding farms with diarrheal problems, a new coronavirus-like particle was detected b y electron microscopic examination of intestinal or fecal samples from sick pigs. The present report describes the morphology of this coronavirus-like particle, and shows that it is distinct from the known porcine coronaviruses and causes diarrhea. Up to now, the only coronaviruses isolated from swine have been transmissible gastroenteritis virus {TGEV) and hemagglutinating encephalomyelitis virus (HEV). TGEV has been described as a cause of diarrhea in swine in countries all over the world (13). Numerous studies have been performed on the v i r u s --a n i m a l interactions of TGEV, which is usually detected either by its isolation from fecal These studies were supported by the Institute for the Encouragement of l~esearch in Industry and Agriculture (IWONL), Brussels, Belgium. material in cell cultures or by immunoflu orescence in the smM1 intestinal epithelium of infected pigs (7,12). TGEV infections can also be diagnosed serologically. J:IEV was first described in Canada in 1962 as a cause of centrM nervous disorders in pigs (4). The same virus was later associated with a disease syndrome called vomiting and wasting disease in several European countries (1,8). The virus can easily be detected by cultivation in several porcine cell cultures (11). Both TGEV and HEV have been classified as coronaviruses mainly on the basis of their specific morphology (10). In 1977, a sudden outbreak of diarrhea was observed in swine of all ages on 4 Belgian swine breeding farms. The morbidity in sows was very variable and the animals recovered after a diarrhea which lasted 3 to 4 days. All the pigs showed a watery diarrhea. Death occurred up to the age of 7 days and the overalI mortality rate in these piglets was approximately 50 per cent (9). It decreased with increasing age. TGEV was suspected as the cause of this diarrhea. However, the direct immunofluorescence test for the diagnosis of TGEV, which is routinely applied on cryostat sections of the small intestine of sick pigs, was negative for these pigs. 
The absence of seroneutralizing antibodies to TGEV in the blood of sows collected 6 to 12 weeks after the outbreak confirmed that TGEV was not involved. In an attempt to arrive at an etiologic diagnosis, fecal material and intestinal contents from pigs of each farm were subsequently processed for examination in an electron microscope by negative staining. They were diluted 1 to 5 (v/v) in phosphate-buffered saline, pH 7.3, and clarified at 3000 × g, at 4°C, for 30 minutes. The supernatant was layered on top of a 20 per cent sucrose solution and centrifuged at 150,000 × g, at 4°C, for 40 minutes. The resulting pellet was resuspended in a few drops of distilled water, placed on 200 mesh formvar-coated grids, and stained with 2 per cent K-phosphotungstate, pH 6.1. Grids were examined using a Zeiss EM 9 S-2 electron microscope at an acceleration voltage of 60 kV. Micrographs used for particle size measurement were taken at an instrumental magnification of 28,000×, and were then photographically enlarged to 84,000× or 168,000×. Rotavirus particles were not detected. However, coronavirus-like particles were observed in specimens of pigs from each of the 4 breeding farms. One of the fecal samples containing these coronavirus-like particles was designated CV777 and was used for further studies. The etiologic relationship between the coronavirus-like particles, CV777, and the occurrence of diarrhea was established by oral inoculation of a 20 per cent suspension of the fecal material containing CV777 into a one day old colostrum-deprived pig. The experimental pig was killed 30 hours later, at the height of diarrhea, and a virus stock was prepared from a homogenate of its small intestine and contents. A bacteria-free filtrate of the supernatant of a 20 per cent suspension of this material was used for inoculation of 12 colostrum-deprived, hysterectomy-derived piglets, kept in isolation. Seven control pigs were used. The pigs were inoculated at the age of 3 to 15 days. All the inoculated pigs developed a watery diarrhea within 24 to 36 hours after inoculation, whereas the control animals remained normal. Coronavirus-like particles were detected by electron microscopic examination in the watery feces or intestinal contents of each of the experimentally inoculated pigs. Such particles were not found in the feces of the same pigs prior to inoculation or in the fecal samples of the control animals. The particles, shown in Figure 1, had typical coronavirus morphology. They were pleomorphic, with a range in diameter of 95 to 190 nm, including the projections, which were approximately 18 nm in length. Most particles were between 130 and 170 nm in diameter. The projections formed a single fringe radiating from the core. They appeared to be club-shaped. Only the dilated distal ends of the projections were seen on the micrographs. The negative stain also appeared to settle on the surface of some particles, and an electron-opaque central area covered by surface projections was often seen (Fig. 1a, arrows, and 1c). No internal structure was observed. It was impossible to distinguish these coronavirus-like particles morphologically from TGEV or HEV particles in similar preparations. Other particles, different from these coronavirus-like particles, were also observed in the majority of the fecal samples. As seen in Figure 2, they were pleomorphic and very variable in size, ranging in diameter from 95 to 650 nm, with an average diameter of 190 to 225 nm.
They carried numerous short projections, of approximate length 9 nm, on their surfaces. Similar particles of unknown identity have been described in human and animal fecal samples (3, 5, 6). In the present studies such particles have also been found in the solid fecal samples of the control pigs. They appeared, therefore, not to be associated with diarrhea. Rotaviruses and other recognizable virus particles were not seen in control or experimentally inoculated pigs. As already mentioned, TGEV was eliminated as the cause of the diarrhea on the original farms. Additionally, 9 out of the 12 experimentally inoculated pigs, killed at the height of diarrhea, were negative for TGE viral antigens in their small intestinal epithelium by the direct immunofluorescence test. Furthermore, the remaining 3 pigs, inoculated with CV777 at the age of 15 days, were allowed to recover after a diarrhea which lasted 4-5 days. A serum sample, collected from these pigs 3 weeks later, did not contain neutralizing antibodies against the cell culture adapted Purdue strain of TGEV.

Fig. 2. One coronavirus-like particle CV777 (arrow) together with pleomorphic particles of unknown identity. Bar represents 100 nm.

The possibility that the CV777 particles consisted of HEV was less likely, since the latter virus does not cause diarrhea in pigs. Cryostat sections of the small intestine of experimentally inoculated pigs were negative for fluorescence by the direct test using a conjugate directed against the VW572 isolate of HEV (8). Furthermore, the pigs that had been allowed to recover did not possess hemagglutination-inhibiting or seroneutralizing antibodies against this HEV isolate. Preliminary attempts were made to cultivate the coronavirus-like particle, CV777, in primary pig kidney cell cultures and in secondary porcine thyroid cells. Four weekly blind passages were made. The cells were examined for cytopathic effect and hemadsorption with chicken red blood cells, and the cell culture fluids were examined for hemagglutination. No evidence of viral replication in the cell cultures was obtained. It is known that HEV can easily be isolated in primary pig kidney cell cultures using the same criteria (8). The present data suggest that, as well as TGEV and HEV, another previously unrecognized coronavirus-like virus is prevalent in swine. The results indicate that diarrhea can be reproduced in experimental pigs with this virus and that it is associated with certain outbreaks of epizootic diarrhea on Belgian swine breeding farms. More details on the clinical disease in the field and on the results of the experimental infections will be reported later.
2017-07-29T02:08:52.610Z
2005-01-01T00:00:00.000
{ "year": 1978, "sha1": "76a7b0a301baf4436319df3e013755e6f69a80ca", "oa_license": null, "oa_url": "https://link.springer.com/content/pdf/10.1007/BF01317606.pdf", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "76a7b0a301baf4436319df3e013755e6f69a80ca", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
248069667
pes2o/s2orc
v3-fos-license
Black Hole Shadows Constrain Extended Gravity

The first images of black hole shadows open new possibilities to constrain modern extended gravity theories. We present a method of shadow background calculation for black hole solutions in the form of Taylor series where $g_{11} = - g_{00}^{-1}$. The method is extended to the general non-rotating case $g_{11} \neq - g_{00}^{-1}$. The results of the analysis are compared with the predictions of General Relativity, taking into account the Event Horizon Telescope data. The results for the Horndesky model with the Gauss-Bonnet invariant, loop quantum gravity, the Bumblebee model and Gauss-Bonnet gravity are in full agreement with the observations of M87*. In conformal gravity, large values of $m_2$ and $Q_s$ must be excluded. In STEGR $f(Q)$ gravity the observational limits on the parameter $\alpha$ are $-0.025<\alpha<0.04$. For an alternative generalization of the Bumblebee model with the Schwarzschild approximation: $-0.3<l<0.45$. These results demonstrate the maximum one can achieve without taking the rotation of a black hole into account.

INTRODUCTION

The first spherically-symmetric solutions were discovered more than 100 years ago. The existence of real objects described by these spherically-symmetric and axially-symmetric metrics was proven by observations not so long ago. The results on binary system dynamics [1], gravitational wave astronomy [2, 3] and the direct imaging of black hole (BH) shadows [39] are the most well-known examples. Currently the General Theory of Relativity (GR) reproduces the astronomical data with great accuracy. Meanwhile such problems as dark matter, dark energy, the early Universe evolution, quantum gravity and others are waiting for a better theoretical basis. So, new extended gravity models are developed to explain these phenomena better. They are f(R) gravity [4], f(Q) gravity [26], scalar-tensor theories including the most general case with second order field equations, Horndesky theory [6-9], teleparallel models [10], gravity models with conformal symmetry [11, 12], loop quantum gravity [13-15], scalar Gauss-Bonnet gravity [16] and other approaches. It seems important to constrain these extended gravity theories, and the achievements in black hole shadow imaging provide an additional possibility for this. Let us briefly recall the key features of the models discussed further. We start from the Horndesky model [17]. It represents the most general case of scalar-tensor gravity producing second order field equations [18]. The Horndesky model could model dark energy or dark matter. It seems to be more fundamental than the pure Brans-Dicke model. After GW170817 Horndesky theory was severely limited. Now it is used in the form of DHOST (degenerate higher-order scalar-tensor) theories [19]. Further, Horndesky theory is often combined with the Gauss-Bonnet invariant S_GB = R_{αβγσ}R^{αβγσ} − 4R_{αβ}R^{αβ} + R² [9], where R_{αβγσ}, R_{αβ} and R are the curvature tensors and scalar. The next model under consideration is loop quantum gravity (LQG). LQG represents a promising approach to constructing a quantum theory of gravity. The key idea is the independence from other physical interactions and the application of a specific choice of parametric space. Theory functions form a closed algebra of operators, allowing one to construct a renormalizable theory. LQG allows one to combine bounce and inflation stages and to reproduce the theory of the early Universe [20]. Going further, we mention gravity models with conformal symmetry [21].
Such a symmetry in the action gives prospects for constructing a renormalizable theory of gravity. Linear realizations have fourth order field equations. Now the community considers models with a nonlinear symmetry realization [22-24]. These models have been developed only recently and still have a set of problems; for example, there is no inflation asymptotic [24]. If these problems were solved, such models would seem promising (in addition to quantum gravity) for dark energy. The next example is the Bumblebee model. This model extends standard GR by a vector field. Under a suitable potential the Bumblebee vector field B_µ acquires a nonvanishing vacuum expectation value. Such a combination induces a spontaneous Lorentz symmetry breaking [25]. The discussed approach could form a "bridge" between string theory at Planckian scales and GR, solving GR problems in the high energy range. Next we discuss the Teleparallel Equivalent of General Relativity (TEGR). Here GR is considered with non-vanishing torsion and non-metricity. Therefore geometrical deformation causes the gravitational field directly. TEGR allows one to include additional degrees of freedom to describe phenomena that GR leaves unresolved. We consider f(Q) gravity, which is a symmetric TEGR (STEGR) where the non-metricity scalar Q is not equal to zero [26]. The last model is scalar Gauss-Bonnet gravity. This is a modified theory with actions including all possible quadratic curvature scalars [16]. The curvature scalars play the same role as in the previous case: being a phenomenological asymptote of some geometry, they could provide a physical explanation of GR's unresolved problems. On the other side, real physical equipment has limited accuracy. Therefore each experimental result admits a few alternative explanations caused by different theories [40]. At the first step, usually the simplest model is chosen for this. Further additional data allow one to narrow down the set of alternative explanations. So the shadow size value, being the first one measured in observations, could be applied for an additional estimation of the model predictions. Therefore we use the standard GR space-times (Schwarzschild, Kerr, ...) as basic approximations. Previously we discussed shadow form and size, last stable orbit and strong gravitational lensing calculations when the third approximation in spherically-symmetric space-time is taken into account. Such metrics represent the continuation of the Reissner-Nordstrom space-time by the next expansion order relative to r^{−1} [28]. Further, when the rotation had been included, the shape of the shadow became sufficient to test theories beyond the Kerr-Newman space-time [29]. So, to define the expansion coefficients one has to use the observational results on shadow size, last stable orbit and strong gravitational lensing. For example, when the third correction is under consideration, two different probes are required to restore the BH characteristics. For the calculation of the next expansion orders one has to increase the number of probes. In this paper we discuss how to constrain some extended gravity models using modern data on BH images. Here it is necessary to point out that in new gravity models the activity is at first concentrated on non-rotating BH solutions as the simpler ones. Therefore we develop the formalism for spherically-symmetric space-time to extract maximum information in this lighter case (as a first step of a general study). Our consideration is extended to the case g_{11} ≠ −g_{00}^{−1}.
It was shown [30] that the maximum variation of the shadow size for a rotating BH amounts to about 5-7%. If the spin value is small, the influence of rotation on the shadow size can be neglected. The first limits on the BH shadow size in the M87* observation were obtained in [30]: $4.31M < D < 6.08M$. The paper is organized as follows. Section 2 is devoted to the degenerate case $g_{11} = -g_{00}^{-1}$, Section 3 extends the consideration to the $g_{11} \neq -g_{00}^{-1}$ case, in Section 4 we discuss the examples for the gravity models mentioned above, and Section 5 contains our conclusions.

SHADOW MODEL AT $A(r) = B^{-1}(r)$

The general description of an asymptotically flat static spherically-symmetric space-time in modified gravity represents the extension of the Schwarzschild metric in the form

$$ds^2 = -A(r)\,dt^2 + B(r)\,dr^2 + r^2(d\theta^2 + \sin^2\theta\,d\varphi^2),$$

where $A(r)$ and $B(r)$ are metric functions depending upon the radial coordinate $r$. The standard Schwarzschild metric in Planckian units $G = c = \hbar = 1$ has the form $A(r) = B(r)^{-1} = 1 - 2M/r$, where $M$ is the BH mass. Note that the Schwarzschild metric, the Reissner-Nordstrom one and further extensions represent the terms of a Taylor expansion when $r \gg 2M$. In this approach one considers the Schwarzschild metric as a first approximation; in this approximation one can describe the trajectories of stars around the central BH. The Reissner-Nordstrom metric, as the next expansion order, allows one to describe the influence of electrical or tidal charges [31]. The appearance of a tidal charge sometimes drastically changes the shadow properties [28,29,32,33]. We start from the degenerate case of "symmetrical" metric functions $A(r) = B(r)^{-1}$ in Eq. (1). The event horizon position is defined by $A(r) = 0$; when the solution of this equation is not unique, one has to consider the external one. At the next step one restricts the series of $A(r)$ to the third expansion order as

$$A(r) = 1 - \frac{2M}{r} + \frac{Q}{r^2} + \frac{C_3}{r^3},$$

where $Q$ is a tidal charge and $C_3$ is the Taylor expansion coefficient at the $r^{-3}$ order. To simplify the calculations one can normalize all values by the BH mass: $\bar{r} = r/M$, $q = Q/M^2$ and $c_3 = C_3/M^3$, so that $\bar{r}$, $q$ and $c_3$ become unit free. Hence the configuration space appears to be two-dimensional. The set of unstable photon orbits forms a photon sphere and, hence, defines the boundary of a BH shadow. Photons from a far-distant source with sighting parameter $b$ greater than the critical value $b_{ph}$ pass outside the sphere and, further, reach the external observer. Other photons with $b < b_{ph}$ interact with the BH and form a spot in the image, i.e. the BH shadow. Hence the visible shadow image of a non-rotating (or slowly rotating) BH has the form of a disk. Its radius is defined by the critical sighting parameter ($b_{ph} = 3\sqrt{3}M$ for the Schwarzschild BH [34]). The form of the shadow image may be distorted by strong gravitational lensing. Consider an optically thin accretion disk surrounding the compact object [35]. We follow [34] and modify this approach for the symmetric case $A(r) = B(r)^{-1}$ (Eq. (1)). The radiation is emitted from the surface situated outside the horizon, including the regions inside the photon sphere. Therefore the specific intensity $I_{\nu_0}$ that could be measured (usually in $\mathrm{erg\,s^{-1}\,cm^{-2}\,str^{-1}\,Hz^{-1}}$) at the visible photon frequency $\nu_0$ and the position $(X, Y)$ (coordinates on the image plane) on the sky sphere is equal to [34]

$$I_{\nu_0}(X, Y) = \int_\gamma z^3\, j(\nu_e)\, dl_{prop},$$

where $\nu_e$ is the emitted frequency, $z = \nu_0/\nu_e$ is the redshift, $j(\nu_e)$ is the emissivity per unit volume of the resting source, $dl_{prop} = -k_\alpha u_e^\alpha d\lambda$ is the differential of proper length in the source frame, $k^\mu$ is the 4-velocity of the photon, $u_e^\mu$ is the 4-velocity of the emitting gas and $\lambda$ is the affine parameter along the photon trajectory $\gamma$.
The index $\gamma$ means integration along null geodesics. The redshift $z$ is defined as [34]

$$z = \frac{\nu_0}{\nu_e} = \frac{k_\alpha u_0^\alpha}{k_\beta u_e^\beta},$$

where $u_0^\mu = (1, 0, 0, 0)$ is the 4-velocity of the distant observer. Considering a simple spherically-symmetric model of accretion, we suppose that the gas falls freely in the radial direction towards the BH center with the corresponding 4-velocity. The photon 4-velocity $k^\mu = \dot{x}^\mu$ was calculated in [34]; combining these results one obtains the radial component, where the sign "+(-)" denotes motion away from (towards) the BH. The redshift then transforms accordingly [36]. For the shadow profile we suppose that the frequency of the resting source is $\nu_*$ and that the radiation is monochromatic with the radial profile $1/r^2$ [34]:

$$j(\nu_e) \propto \frac{\delta(\nu_e - \nu_*)}{r^2},$$

where $\delta$ is the Dirac delta-function. The differential of proper length in the resting frame is defined accordingly. Integrating Eq. (5) over all observed frequencies, one obtains the observable intensity of photons at the position $(X, Y)$ on the sky sphere [34]. The value of the sighting parameter $b$ depends upon the position on the image plane $(X, Y)$ and is equal to $b^2 \propto X^2 + Y^2$. After numerical integration we obtain the intensity profile of the BH shadow. The dependence of the shadow size upon $q$ and $c_3$ was calculated earlier [28]. It was shown that if the shadow size is greater than $4M$, only one additional degree of freedom (namely $q$) is necessary. Hence, in the first expansion order, such a shadow can be parametrized by the Reissner-Nordstrom metric. Further, when the intensity profile starts to change, one has to incorporate the next perturbation orders. When the third and further expansion orders are taken into account, the shadow description appears to be non-unique: it allows a set of different parameter combinations, because increasing the order of the equation gives rise to additional solutions. Hence more observational data would be required to constrain the theoretical model. So, in addition to the shadow size, one has to consider the last stable orbit radius, the strong gravitational lensing of a bright object close to the BH and the distribution of the background intensity. As previously, the consideration starts from the Schwarzschild space-time. Figure 1 demonstrates the intensity profile of the BH shadow with $q = 0.2519$, $c_3 = -0.7515$. The key point is that its size is equal to the Schwarzschild one. The difference from the Schwarzschild BH (normalized to the maximal intensity) is $\approx 0.6$ (Fig. 2). As one can see from Fig. 2, this difference grows when the additional parameters increase. The maximal difference takes place near the shadow boundary; then it vanishes while going to infinity. The difference inside the shadow is constant. Further, from Fig. 2 one concludes that an intensity resolution better than 0.1% of the maximal intensity is required to detect this difference in observations. Note that each point of the profile could serve as an additional probe of the BH potential.

SHADOW MODEL AT $A(r) \neq B^{-1}(r)$

In a general spherically-symmetric space-time $A(r) \neq B^{-1}(r)$ (Eq. (1)). To extend our consideration to this case we start from the equations of motion in the form

$$\left(\frac{dr}{d\tau}\right)^2 = \frac{1}{B(r)}\left(\frac{E^2}{A(r)} - \frac{L^2}{r^2}\right), \qquad \frac{dt}{d\tau} = \frac{E}{A(r)}, \qquad \frac{d\varphi}{d\tau} = \frac{L}{r^2},$$

where $E$ is the photon energy, $L$ is the angular momentum of the photon beam and $\tau$ is the affine parameter. After substituting Eq. (14) into Eq. (13) these equations transform accordingly, where $D = L/E$ is the sighting parameter of the photon beam. Analogously to the symmetric case, the shadow boundary occurs where the photon trajectory becomes unstable. Therefore the corresponding equations are

$$D^2 = \frac{r^2}{A(r)}, \qquad 2A(r) - rA'(r) = 0.$$

To calculate the shadow size one has to find the maximal root of Eqs. (16). We proceed numerically.
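As an illustration of this numerical procedure, the following minimal Python sketch (ours, not from the original computation) finds the outermost root of the instability condition $2A(r) = rA'(r)$ by a bracketing scan and evaluates $D = r_{ph}/\sqrt{A(r_{ph})}$. The function names and the scanning range are illustrative assumptions; note that $B(r)$ does not enter the shadow boundary itself in these coordinates, although it does enter the intensity profile.

```python
import numpy as np
from scipy.optimize import brentq

def shadow_size(A, r_min=1.1, r_max=30.0, n=4000, h=1e-6):
    """Shadow radius D (in units of M) for ds^2 = -A dt^2 + B dr^2 + r^2 dO^2.

    Combines the turning-point condition D^2 = r^2 / A(r) with the
    instability condition 2 A(r) = r A'(r); the outermost root is the
    physical photon sphere.
    """
    def g(r):
        dA = (A(r + h) - A(r - h)) / (2.0 * h)   # central-difference A'(r)
        return 2.0 * A(r) - r * dA

    rs = np.linspace(r_min, r_max, n)
    for i in range(n - 1, 0, -1):                # scan inward: external root first
        if g(rs[i - 1]) * g(rs[i]) < 0.0:
            r_ph = brentq(g, rs[i - 1], rs[i])
            return r_ph / np.sqrt(A(r_ph))
    raise RuntimeError("no photon sphere in the scanned range")

# Degenerate (symmetric) case A(r) = 1 - 2/r + q/r^2 + c3/r^3:
print(shadow_size(lambda r: 1.0 - 2.0 / r))   # Schwarzschild: 3*sqrt(3) = 5.196...
print(shadow_size(lambda r: 1.0 - 2.0 / r + 0.2519 / r**2 - 0.7515 / r**3))
# the (q, c3) pair of Fig. 1 returns the same value, reproducing the
# degeneracy with the Schwarzschild shadow size noted above
```

The same routine accepts an arbitrary metric function $A(r)$, which is how the model examples of the next section can be scanned over their parameters.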
Horndeski Theory

We consider the BH solution in Horndeski theory linearly coupled with the Gauss-Bonnet invariant [9]. The metric functions from Eq. (1) then take the form (17), where $C_7$ is a specific combination of the model constants. Only positive values of $C_7$ were considered in [9]. However, the requirement that the horizon position $A(r_h) = 0$ be located outside the surface $B(r) = 0$ (to avoid singularities) is fulfilled only if $C_7 < 0$; when $C_7 > 0$ this object is not a BH. Hence it is reasonable to suppose that the metric (17) is valid outside the photon sphere, while near the horizon a more accurate expansion (compatible with the BH definition) is required. The numerical results on the dependence of the shadow size upon the metric functions from Eq. (17) are presented in Fig. 3. The shadow size for the metric functions (17) differs from the corresponding Schwarzschild value by less than 0.01% (at $|C_7| < 0.5$). This occurs even when the $C_7$ value is comparable with $M$. Hence observational data compatible with GR also do not forbid Horndeski theory.

Loop Quantum Gravity

As the next application we discuss LQG with the modified Hayward metric [14,15] as the BH solution. This metric has no central singularity, which is why it is called a "regular BH". Its extension includes a time delay and the 1-loop quantum correction. The metric functions in this case are given by (20), where $l$ encodes the central energy density $3/(8\pi l^2)$, the constant $\alpha$ is the time delay between the center and infinity, and $\beta$ is related to the 1-loop quantum corrections of the Newtonian potential. These parameters were constrained in [14,15] as $0 \le \alpha < 1$, $\beta_{max} = 41/(10\pi)$. When $l > \sqrt{16/27}\,M$ the object has no horizon. After solving Eqs. (16) one obtains the dependencies (presented in Fig. 4) of the shadow size upon $l$, $\alpha$ and $\beta$. One can see that the shadow size decreases with increasing $l$; on the contrary, increasing $\alpha$ and $\beta$ leads to a larger shadow. When $\beta \ge 0$ the minimal shadow size is reached at $l = \sqrt{16/27}\,M$, $\beta = \alpha = 0$, being equal to $4.92M$. The maximal shadow size occurs when $l = 0$, $\beta = 41/(10\pi)$, $\alpha = 1$ and is equal to $5.32M$. Note that shadows of such size could also be described by the Reissner-Nordstrom space-time, so using the shadow size alone it is impossible to extract all the parameters without additional observational data.

Conformal Gravity

The next example is the gravity model with conformal symmetry. As an example we choose the BH metric in new massive conformal gravity [12], where $Q_s$ is a scalar charge and $m_2$ is the mass of the spin-2 mode. This asymptotic form occurs far from the horizon; since the key point is to account for the transfer of photons to the photon orbit, this asymptote appears to be applicable. Fig. 5 shows the dependence of the shadow size on the scalar charge $Q_s$ for different values of $m_2$ (black line: $m_2 \to \infty$; red: $m_2 = 2$; blue: $m_2 = 1$; green: $m_2 = 0.707$; orange: $m_2 = 0.577$; purple: $m_2 = 0.5$). The ends of the lines correspond to the effect described in [28,32]: large values of $Q_s$ and $1/m_2$ cause the absence of a photon sphere. Decreasing the $m_2$ value reduces the shadow size. So additional observational data are required to constrain the model parameters. The limitations from [30] exclude only large values of $Q_s$ and $m_2$ (Fig. 5).

Bumblebee Model

The spherically symmetric solution in the Bumblebee model has the form [25],
where $l = \xi b^2$, $\xi$ is the real coupling constant (with mass dimension $-1$) which controls the nonminimal gravity-bumblebee interaction, and $b^2 = B^\mu B_\mu$. Calculations show that the size of the shadow does not depend upon the parameter $l$, because the factor $(1 + l)$ can be taken out of the brackets and neglected. This effect occurs for all metrics where $l$ is a constant and $\bar{B}(r)$ and $\bar{A}(r)$ are functions that can be presented as Taylor series. An alternative generalization can be established, and the Schwarzschild metric can be used as the first approximation for $\bar{B}(r)$ and $\bar{A}(r)$ to examine such a generalization. The influence of the parameter $l$ on the size of the shadow is presented in Fig. 6; for the other approximation of $\bar{B}(r)$ and $\bar{A}(r)$ the dependence has the same form. Setting the limits from the M87* observation [30] on the Schwarzschild approximation, one obtains $-0.3 < l < 0.45$.

$f(Q)$ Gravity

$f(Q)$ gravity is a STEGR with non-metricity scalar $Q$ [26]. We chose the $(I^+)$ approximate solutions beyond GR from the constraints on the connection [?], where $a$ is an expansion parameter, $c_1$ is the integration constant, and $M_{ren}$ is the renormalized mass. Note that for a distant observer there is no difference between the renormalized and Schwarzschild masses, so we continue in $M_{ren}$ units. The influence of $\alpha$ on the shadow size is shown in Fig. 7 (the dependence of the shadow size $D$ upon the parameter $\alpha$ in $f(Q)$ gravity, in $M_{ren}$ units). We set the limits from the M87* observations [30] as $-0.025 < \alpha < 0.04$.

Scalar Gauss-Bonnet Gravity

The static spherically symmetric solution in scalar Gauss-Bonnet gravity was obtained analytically in [16], where $\zeta$ is the coupling parameter. In [38] the dependencies of the photon sphere radius and the shadow radius upon $\zeta$ were obtained to first order in the coupling constant. We obtained the solution numerically with higher accuracy (Fig. 8). There is no photon sphere when $\zeta > 0.3$; analogously to [28,32], this means that such an object has no shadow.

DISCUSSION AND CONCLUSIONS

The resolution of the first BH images from the Event Horizon Telescope was approximately half of the object's size [39]. Further improvement of ground-based equipment could increase the resolution only a few times (not by orders of magnitude!). In addition, the maximum possible size of a ground-based network of radio telescopes has already been reached. As demonstrated earlier in [28,29] and developed in the previous sections, constraining real extended gravity models requires accuracy improvements of a few orders of magnitude (not a few times!). Therefore the next step could be an orbiting telescope network. Moreover, the measurement of the shadow size without additional data would be enough only for models based on the Reissner-Nordstrom metric. For theories with a more complicated BH space-time, the number of observational points must be increased. If one considers the last stable orbit, the strong gravitational lensing of bright stars [28] and the distribution of the shadow background intensity (as we showed above), the minimal resolution appears to be equal to 0.001 of the shadow size. This is the maximal assumption, valid if the additional coefficients are comparable with the BH mass $M$. Therefore the study of the shadow of a rapidly rotating object seems more promising: with the same values of the additional coefficients, the necessary resolution would be about 0.01 of the shadow size [29]. So there is an additional reason to develop the theory of Kerr-like BH shadows to explore extended gravity models in astrophysics.
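As a sketch of how the observational window translates into the parameter bounds quoted in the conclusions below: assuming the shadow-size curve $D(p)$ of a one-parameter model can be evaluated (for instance with the shadow_size routine sketched earlier), the M87* limits $4.31M < D < 6.08M$ [30] select an allowed parameter interval. The one-parameter metric family used in the example is purely hypothetical.

```python
import numpy as np

D_MIN, D_MAX = 4.31, 6.08    # M87* bounds on the shadow size from [30], in M

def allowed_interval(D_of_p, p_grid):
    """Parameter values whose shadow size falls inside the M87* window.

    D_of_p: callable returning the shadow size for one parameter value,
    e.g. built from the shadow_size() sketch above for a concrete metric.
    """
    ok = [p for p in p_grid if D_MIN <= D_of_p(p) <= D_MAX]
    return (min(ok), max(ok)) if ok else None

# hypothetical one-parameter family A(r; p) = 1 - 2/r + p/r^3 for illustration
interval = allowed_interval(
    lambda p: shadow_size(lambda r, p=p: 1.0 - 2.0 / r + p / r**3),
    np.linspace(-0.5, 0.5, 101),
)
print(interval)
```

For a real model one replaces the toy family by the corresponding metric function and reads off bounds of the kind quoted below for $f(Q)$ gravity and the Bumblebee generalization.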
We also calculated the dependencies of the shadow size upon the model parameters in different extended gravity theories and set limits on them using the M87* BH observations. The results for the Horndeski model with the Gauss-Bonnet invariant, LQG, the Bumblebee model and the scalar Gauss-Bonnet model lie in complete agreement with the M87* observations: for most of the considered examples, the model predictions do not cross the boundary established by the existing observational data. In addition, in conformal gravity large values of $m_2$ and $Q_s$ must be excluded (for example, if $m_2 = 2$ then $Q_s < 0.9$). In STEGR $f(Q)$ gravity the M87* observations constrain $\alpha$ as $-0.025 < \alpha < 0.04$. In the alternative Bumblebee generalization with the Schwarzschild approximation one obtains $-0.3 < l < 0.45$. These results demonstrate the maximum that could be distinguished when BH rotation is not taken into account. Finally, the approach without taking BH rotation into account is valid only when the BH rotation speed is small and can be neglected. When rotation is included in the consideration, the number of probes necessary to distinguish between different gravity models increases; "as a compensation", the requirements on the observational accuracy decrease.
Self-sustained current oscillations in the kinetic theory of semiconductor superlattices

We present the first numerical solutions of a kinetic theory description of self-sustained current oscillations in n-doped semiconductor superlattices. The governing equation is a single-miniband Boltzmann-Poisson transport equation with a BGK (Bhatnagar-Gross-Krook) collision term. Appropriate boundary conditions for the distribution function describe electron injection in the contact regions. These conditions seamlessly become Ohm's law at the injecting contact and the zero charge boundary condition at the receiving contact when integrated over the wave vector. The time-dependent model is numerically solved for the distribution function by using the deterministic Weighted Particle Method. Numerical simulations are used to ascertain the convergence of the method. The numerical results confirm the validity of the Chapman-Enskog perturbation method used previously to derive generalized drift-diffusion equations for high electric fields because they agree very well with numerical solutions thereof.

Introduction

When non-interacting electrons in the conduction band of a material are subject to a constant electric field $E$, their positions should oscillate with a frequency proportional to the electric field, $\omega_B = eEl/\hbar$, where $-e < 0$, $\hbar$ and $l$ are the charge of the electron, the reduced Planck constant and the crystal period. These coherent Bloch oscillations (BO) and the associated current were predicted by Zener in 1934 [21]. Scattering limits the observability of BO: to observe them, their period should be smaller than the scattering time $\tau$, so that $E > \hbar/(el\tau)$. In standard materials, the fields required to observe BO are too large, and therefore damped Bloch oscillations were not found until 1992, in experiments with undoped semiconductor superlattices [11], which have much larger periods than natural crystals. Semiconductor superlattices are artificial one-dimensional crystals formed by epitaxial growth of layers belonging to two different semiconductors that have similar lattice constants [4]. They were synthesized following Esaki and Tsu's idea that these artificial crystals would be useful to realize BO or related high-frequency oscillations [8]. The difference in the energy gaps of the component semiconductors makes the conduction band of the superlattice a periodic succession of barriers and wells with typical periods of several nanometers. The electronic spectrum of a superlattice (SL) consists of a succession of minibands and minigaps generated by its periodicity. Tailoring the size of the barriers and wells and the n-type doping density of the latter, it is possible to achieve SLs with wide minibands and to populate only the lowest one. Electrons moving in this miniband have energies that are periodic functions of their wave numbers and are scattered by phonons, impurities and other electrons. When an appropriate dc voltage is held between the ends of one such SL with finitely many periods, it is possible to obtain high-frequency self-sustained oscillations of the current through the structure [4]. These oscillations are caused by the repeated formation of electric field pulses at the injecting contact of the SL that move forward and disappear at the receiving contact. Thus they are transit-time oscillations whose frequency is inversely proportional to the SL length: they are similar to the Gunn effect in bulk semiconductors [18] and are different from BO.
These Gunn-type oscillations have been observed in experiments with GaAs/AlAs SLs (and with other SLs based on III-V semiconductors) since 1996 and are the basis of fast oscillator devices [13]. The connection between the existence of Gunn-type oscillations and the suppression of Bloch oscillations is not yet well understood despite theoretical and experimental efforts [4]. Although mathematical models at the level of semiclassical kinetic theory go back to the 1970s [19], their analysis has been based on simplified reduced ordinary differential equations [14,15] which typically ignore space charge effects. Electron transport in a single-miniband SL can be described by a kinetic equation coupled to a Poisson equation approximately describing the electric potential due to the other electrons [3]. A simple kinetic equation [14] contains an energy-dissipating collision term of Bhatnagar-Gross-Krook (BGK) type [1] and a simple energy-conserving (but momentum-dissipating) collision term. This model does not include coupling to the Poisson equation. An important point is that the dispersion relation between miniband energy and momentum is periodic; this periodicity gives rise to a relation between electron drift velocity and electric field which has a maximum value [4]. The drift velocity then decreases as the field increases for large field values (negative differential mobility), and this in turn causes the Gunn-type self-sustained current oscillations (SSCO) for appropriate bias and contact boundary conditions [4]. These features are absent in the more usual Boltzmann-Poisson systems with parabolic band dispersion relations. Recently, Bonilla et al. [3] have derived a nonlinear drift-diffusion equation from the KSS-BGK kinetic model coupled to the Poisson equation, which we will call the BGK-Poisson system. They use a Chapman-Enskog perturbation method in a particular limit in which the collision terms are of the same order as the term containing the electric field and dominate all other terms in the kinetic equation. Stable SSCO are then obtained by numerically solving the drift-diffusion equation with appropriate boundary and initial conditions [3]. However, no one has numerically solved the kinetic equation directly and shown that self-oscillations are among its solutions, or studied the relation between these solutions and those of the limiting drift-diffusion equation. These are the problems tackled in the present paper, and solving them could be a step toward more precise studies of stable current oscillations in superlattices and other low-dimensional solid state systems. We solve the BGK-Poisson kinetic equation model by means of a deterministic weighted particle method that has been used in the past to solve Boltzmann equations with non-periodic energy band dispersion relations [20]. Particle methods (see a recent one in [10]) are appropriate to study our system of equations because their solutions may present large gradients: the electric field pulses obtained by simulating the approximate drift-diffusion equations have a smooth leading front but a steep trailing back front [4]. The present work paves the way to numerically solving interesting problems in nanoelectronics and spintronics that are described by related quantum kinetic equations with more than one miniband [2].

The Model

Our model for electron transport in a single-miniband SL is a Boltzmann-Poisson system with a BGK collision term [1] plus appropriate boundary and initial conditions.
The governing equations are Eqs. (1)-(4), with $x \in [0, L]$ and $f$ periodic in $k$ with period $2\pi/l$. Here $l$, $L = Nl$, $N$, $\varepsilon$, $f$, $n$, $N_D$, $k_B$, $T$, $V$, $-F$, $m^*$ and $-e < 0$ are the SL period, the SL length, the number of SL periods, the dielectric constant, the one-particle distribution function, the 2D electron density, the 2D doping density, the Boltzmann constant, the lattice temperature, the electric potential, the electric field, the effective mass of the electron, and the electron charge, respectively. We shall describe the boundary and initial conditions later. The first term on the right hand side of Eq. (1) represents energy relaxation towards a 1D effective Fermi-Dirac distribution $f^{FD}(k; \mu(n))$ [3] (local equilibrium) due to, e.g., phonon scattering. $\nu_{en}$ is the collision frequency, taken as constant for simplicity. Here, $\mu(n)$ is the chemical potential, a function of $n$ resulting from solving Eq. (3) when (4) is substituted into it. A similar BGK model with a local Boltzmann distribution function was proposed by Ignatov and Shashkin [14,15]. The second term on the right hand side of Eq. (1) accounts for elastic impurity collisions with the constant collision frequency $\nu_{imp}$, which conserve energy but dissipate momentum [19,3,4]. Transfer of lateral momentum due to impurity scattering [12] is ignored in this model. We assume the simple tight-binding miniband dispersion relation

$$\mathcal{E}(k) = \frac{\Delta}{2}\,(1 - \cos kl),$$

where $\Delta$ is the miniband width. The exact and Fermi-Dirac distribution functions, $f$ and $f^{FD}$, have the same electron density $n$, according to (3). The latter equation is solved for the chemical potential $\mu$ in terms of $n$, which yields the function $\mu(n)$. When (1) is integrated over $k$, we obtain the charge continuity equation

$$\frac{\partial n}{\partial t} + \frac{\partial J_n}{\partial x} = 0,$$

where $J_n$ is the electron current density.

Voltage bias condition

Using the Poisson equation (2) to eliminate $n$, we obtain Ampère's law (8), where $J(t)$ is the total current density. The total current density can be obtained from the voltage bias condition

$$\frac{1}{L}\int_0^L F(x, t)\, dx = \Phi(t),$$

where $\Phi(t)L$ is the voltage between the two contacts at the ends of the SL and $\Phi(t)$ is the average field. For dc voltage bias, $\Phi(t)$ is a fixed constant $\phi$. If we integrate (8) over $x$ and use (9), we obtain Eq. (10) for $J(t)$.

Boundary conditions

The boundary conditions give the distribution function $f$ at the contacts at $x = 0$ and $x = L$ in terms of the distribution function inside the semiconductor. For fixed $|k|$, there are two possible characteristic curves at a point $(x, t)$: one for $k > 0$ and another one for $k < 0$. For $k < 0$, the characteristic curve reaching $x \to 0+$ at $t > 0$ is determined by the initial condition, whereas it is determined by the distribution function at the contact ($x = 0$) if $k > 0$. Then, for $x = 0$ we need to specify the distribution function at the contact for $k > 0$, $f^+$, whereas for $x = L$ we need to specify the distribution function at the contact for $k < 0$, $f^-$. Instead of inventing a theory for injecting and collecting contacts, we use a top-down approach proposed in Ref. [4]: we know that boundary conditions (11) and (12) appropriately describe current self-oscillations in the drift-diffusion equation for the electric field, where $\sigma > 0$ is the constant contact conductivity and the left hand side of Eq. (11) is the electron current density. We will use boundary conditions for $f$ such that they become (11) and (12) when we integrate them according to the definitions (3) and (7) of $n$ and $J_n$, respectively: (13) for $x = 0$, and (14) for $x = L$. Note that the integral over $k$ of (13) times $ev(k)/(2\pi)$ yields (11), and the integral over $k$ of (14) times $l/(2\pi)$ yields (12). In these equations, $f^{(0)}$ is the leading order approximation to the distribution function in the Chapman-Enskog method [3,4], given by Eq. (15), which is the solution of (1) when we drop the $x$ and $t$ derivatives of $f$ (see [3]). If we use the electric potential $V$ instead of the field $F = \partial V/\partial x$ (recall that the true electric field is $-F$), the boundary conditions (16) for $V$ are compatible with (9).

Initial condition

We select (15) as our initial condition for the distribution function. The initial electric field is assumed to be constant, $F(x, 0) = \phi$, where $\phi$ is the average field. If we start from other initial conditions, the evolution of the current and other magnitudes is similar to that presented here after about 0.3 ps. Recapitulating, the equations governing our model are (1)-(4) for the unknowns $f$ and $V$, with initial condition (15) and boundary conditions (13), (14) and (16). If we use the field $F$ instead of the electric potential $V$, the voltage bias condition (9) for $F$ replaces (16).

Nondimensional equations

We use the scales defined in Table 1 to nondimensionalize the Boltzmann-BGK-Poisson kinetic equations. These scales are based on the hyperbolic scaling explained in the literature (Table 1: Hyperbolic scaling), where $M$ verifies a normalization condition. Numerical values for these parameters will be given in Section 5.
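For reference, a small Python sketch of the tight-binding dispersion introduced above and of the resulting miniband group velocity; the numerical values are those of the superlattice simulated in Section 5. The sinusoidal velocity that changes sign inside the Brillouin zone is what ultimately produces the negative differential mobility.

```python
import numpy as np

HBAR = 1.054571817e-34                  # reduced Planck constant, J*s
EV = 1.602176634e-19                    # electron volt, J

def miniband_energy(k, delta, l):
    """Tight-binding miniband dispersion E(k) = (Delta/2)(1 - cos kl),
    periodic in k with period 2*pi/l; delta is the miniband width."""
    return 0.5 * delta * (1.0 - np.cos(k * l))

def group_velocity(k, delta, l):
    """v(k) = (1/hbar) dE/dk = (Delta*l / (2*hbar)) sin(kl); the sign change
    inside the Brillouin zone is the source of the negative differential
    mobility once averaged over the distribution function."""
    return 0.5 * delta * l / HBAR * np.sin(k * l)

# values of the SL simulated below: Delta = 72 meV, l = 3.64 nm + 0.93 nm
delta = 72e-3 * EV
l = (3.64 + 0.93) * 1e-9
print(group_velocity(np.pi / (2 * l), delta, l))  # peak speed, ~2.5e5 m/s
```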
Note that the integral over k of (13) times e v(k)/(2π) yields (11) and the integral over k of (14) times l/(2π) yields (12). In these equations, f (0) is the leading order approximation for the distribution function in the Chapman-Enskog method [3,4]: where Eq. (15) is the solution of (1) when we drop the x and t derivatives of f (see [3]). If we use the electric potential V instead of the field F = ∂V /∂x (recall that the true electric field is −F ), the following boundary conditions for V are compatible with (9): Initial condition We select (15) as our initial condition for the distribution function. The initial electric field is assumed to be constant, F (x, 0) = φ, where φ is the average field. If we start from other initial conditions, the evolution of the current and other magnitudes are similar to those presented here after about 0.3 ps. Recapitulating, the equations governing our model are (1) -(4) for the unknowns f and V with initial condition (15) and boundary conditions (13), (14) and (16). If we use the field F instead of the electric potential V , the voltage bias condition (9) for F replaces (16). Nondimensional equations We use the scales defined in Table 1 to nondimensionalize the Boltzmann-BGK-Poisson kinetic equations. These scales are based on the hyperbolic scaling explained in Ref. Table 1 : Hyperbolic scaling. where M verifies Numerical values for these parameters will be given in Section 5. Equations (1) -(4) have the following nondimensional form The dimensionless boundary conditions are, for x a = 0: and for x a = L/x 0 : The boundary conditions for the electric potential V a are 8 The dimensionless initial condition is and f a periodic in k a with period 2π. Besides the electron current density, J n , it is convenient to calculate the average energy E (and its nondimensional version, E a ), defined as E a = E/(k B T ): From now on we drop the superscript a. The Deterministic Weighted Particle Method The most widely used numerical method used for solving Boltzmann equations is the Monte-Carlo Method [17]. This stochastic method yields data with a lot of numerical noise. The deterministic Weighted Particle Method (WPM) is an interesting alternative because it yields the distribution function (and therefore its moments: electron density, average energy and current density) at each time during the transient regimes with much less noise than the Monte Carlo simulation; cf. [20,7,5] (a numerical analysis of WPM can be found in [6] and in [16] for the special case of the BGK equation of gas dynamics). The WPM relies on a particle description of the distribution function, which means that f (x, k, t) is written as a sum of delta functions where ω i , f i (t), x i (t) and k i (t) are, respectively, the (constant) control volume, the weight, the position and the wave vector of the ith particle. N is the number of numerical particles. In the WPM, the motion of particles is governed by collisionless dynamics, whereas the collisions are accounted for by the variation of weights. Large gradients in the solution profile arise from appropriate particles acquiring large weights, not by accumulating many particles in the large gradient regions. The evolution of the particles is determined by their positions and wave vectors which are the characteristic curves of the convective part of the equation. Their equations are: , t) denotes the electric field at the instantaneous position of the i-th particle. 
The evolution of the weight $f_i(t)$ is given by the ordinary differential equation (27), with $f_i^{FD}(t)$ the Fermi-Dirac distribution evaluated at the $i$-th particle. The system of ordinary differential equations (26)-(27) is now solved by using a modified (semi-implicit) Euler method, Eqs. (28)-(30). For stability reasons, we use $k_i^n$ to update $x_i^n$. The standard Euler method would use $k_i^{n-1}$ to update $x_i^n$, but this would require impractically small time steps to obtain a stable scheme. The same problem appears when we employ explicit Runge-Kutta or multi-step methods. To select the initial positions and wave vectors in the modified Euler method, we build a grid in the domain $[0, L] \times [-\pi, \pi]$. The boundary conditions are taken into account as follows:

- If $k_i^n > \pi$, we set $k_i^n = k_i^n - 2\pi$; if $k_i^n < -\pi$, we set $k_i^n = k_i^n + 2\pi$.
- If $x_i^n > L$, we set $x_i^n = x_i^n - L$ and reset the weight $f_i^{n-1}$ using the contact values. Here $f_i^+$ and $f_i^-$ are calculated by discretizing the integrals in (21) and (22) using the composite Simpson's rule on an equally spaced mesh $K_{m'}$ with step $\Delta k$.

To calculate $x_i$, $k_i$ and $f_i$ at the next time step $t^{n+1}$, we need to update the electric field and the Fermi-Dirac distribution in the equations for the particles. According to Eqs. (2) and (3), this updating requires an interpolation procedure to generate an approximation of the distribution function on a regular mesh $X_m$, $K_{m'}$, which is then used to approximate the electric field and the chemical potential. To approximate the values of the distribution function over the mesh, $f_{m,m'}^n$, we use the weighted mean (31) of its values for the particles, $f_i^n$, where $\Delta x$ and $\Delta k$ are the spatial and wave vector steps. Approximations for the density (19) and average energy (25) at the mesh points, $n(X_m, t^n) \approx n_m^n$ and $(k_B T)^{-1} E(X_m, t^n) \approx (k_B T)^{-1} E_m^n$, are obtained using the composite Simpson's rule and the interpolated values of the distribution function on the mesh. The chemical potential is then obtained by solving $g(\mu) = 0$; the initial guess for $\mu$ is obtained by plotting $g(\mu)$ and selecting a value near its zero, and $g(\mu)$ and $g'(\mu)$ are evaluated using the composite Simpson's rule. Once we know the chemical potential $\mu$, Eq. (20) provides the Fermi-Dirac distribution function at the mesh points, $f^{FD}(K_{m'}; \mu(n_m^n))$, which is then interpolated to get the Fermi-Dirac weight function for the particles, $f_i^{FD,n}$. To compute the electric field at time $t^n$, we use finite differences to discretize the Poisson equation on the grid $X_m$, with $V(0, t^n) = 0$ and $V(L, t^n) = \phi L$ as indicated by (23). $V_m^n$ and $F_m^n$ denote our approximations of $V(X_m, t^n)$ and $F(X_m, t^n)$ on the equally spaced mesh $X_m$. Finally, the electric field is interpolated at the location of particle $i$, provided particle $i$ lies in $[X_m, X_{m+1}]$. The total current density $J$ is given by Eq. (10); its nondimensional version is approximated with the composite Simpson rule to obtain $J(t^n)$. Summarizing, at each time step $t^n$:

(1) Calculate the boundary conditions (21) and (22) with data at time $t^{n-1}$.
(2) Compute $f_i^n$, $k_i^n$ and $x_i^n$ according to (28), (29) and (30), respectively, by using their values at $t^{n-1}$.
(3) Evaluate the distribution function $f_{m,m'}^n$ at the mesh points $(X_m, K_{m'})$ by the weighted mean (31).
(4) Compute the electron density (19) and the nondimensional average energy (25) at the mesh points.
(5) Solve $g(\mu) = 0$ for the chemical potential $\mu(n_m^n)$ at the mesh points.
(6) Evaluate the Fermi-Dirac distribution at the mesh points and interpolate it to obtain the Fermi-Dirac weight function $f_i^{FD,n}$ for the particles.
(7) Solve the discretized Poisson equation for $V_m^n$ and $F_m^n$, and interpolate the electric field at the particle positions.
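As a compact illustration of step (2), here is a minimal Python sketch of one semi-implicit Euler step in the spirit of Eqs. (28)-(30). The dimensionless velocity $v(k) = \sin k$, the implicit treatment of the BGK relaxation, and the omission of the impurity term and of the contact reset of re-injected weights are our simplifying assumptions; F_at and fFD_at stand for the mesh interpolants described above.

```python
import numpy as np

def wpm_step(x, k, f, F_at, fFD_at, dt, nu_e, L):
    """One semi-implicit Euler step of the weighted particle method
    (nondimensional units).

    The wave vectors are advanced first with the field at the old
    positions; the new wave vectors then move the positions (the
    modification that keeps the scheme stable); the weights relax
    towards the local Fermi-Dirac values.  Only the BGK term is kept:
    the impurity term and the contact reset of re-injected weights
    would be added analogously."""
    k = k + dt * F_at(x)                      # k^n from the field at x^{n-1}
    k = (k + np.pi) % (2.0 * np.pi) - np.pi   # fold into the Brillouin zone
    x = (x + dt * np.sin(k)) % L              # x^n from v(k^n); re-inject at ends
    # implicit treatment of the stiff BGK relaxation of the weights
    f = (f + dt * nu_e * fFD_at(x, k)) / (1.0 + dt * nu_e)
    return x, k, f
```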
We have observed that the costliest processes are 2 (computation of $f_i^n$ using (28)) and 6 (computation of the Fermi-Dirac weight function $f_i^{FD,n}$): these two processes take about 50% of the overall computation time and they are equally costly. After them, processes 3, 5 and 7 have the largest computational cost (each takes between 10% and 19% of the overall computation time).

Numerical results

We have used the parameter values of [9]. Numerical solutions of the nonlinear drift-diffusion equation derived from the Boltzmann-BGK model show that there is a stable stationary state for voltage biases below a certain threshold. Above this critical voltage, stable self-sustained oscillations of the current appear. These oscillations are due to the periodic generation of electric field pulses at the injecting contact and their motion towards the receiving contact. We have observed the same phenomena in our numerical solutions of the Boltzmann-BGK kinetic equations. Firstly, we present a typical case of self-sustained current oscillations accompanied by the motion and recycling of an electric field dipole wave, corresponding to a 157-period 3.64 nm GaAs/0.93 nm AlAs SL at 14 K, with $\Delta = 72$ meV, $N_D = 4.57 \times 10^{10}$ cm$^{-2}$, $\nu_{imp} = 2\nu_{en} = 18 \times 10^{12}$ Hz and dimensionless dc average field $\phi = 1$ [9]. The constant contact conductivity is 2.5 $\Omega^{-1}$ cm$^{-1}$ and the effective mass is $m^* = (0.067 d_W + 0.15 d_B) m_0/l$, where $m_0 = 9.109534 \times 10^{-31}$ kg is the electron rest mass. For these parameter values, we consider 140800 particles and a mesh of 440 grid points for $x$ and 80 points for $k$. The time step ($dt$) is 0.002 ps. Figure 1 shows the self-oscillations of the current, and Figure 2 the corresponding electric field pulse at different times. We observe how the electric field pulses are periodically created at the injecting contact $x = 0$, move to the end of the SL and disappear at the receiving contact. In Fig. 2, we have depicted the field profiles at the times marked (a)-(e) in Fig. 1. We observe that the total current density reaches its maximum value when the electric field pulse is about to disappear at the collector. The electric field as a function of time and position is shown in Figure 3, both during one oscillation period in Fig. 3(a) and during several periods in Fig. 3(b). The ratio of the maximum to the minimum current in Fig. 1 is 2.6, whereas the same ratio calculated by solving the drift-diffusion equation derived in [9] is 2.1 (cf. dashed line in Fig. 1(a) of [9]). Measured in units of $t_0$ (which has a different numerical value in [9]), the oscillation period is 104.3 in Fig. 1, whereas the drift-diffusion equation yields 113.8. Comparing Fig. 1(b) of [9] with our Fig. 2, we find that at time (c) the solution of the BGK-Poisson equation produces a pulse far from the contacts which is 11 $x_0$ wide and 7 $F_M$ tall, whereas the drift-diffusion equation yields a similar pulse which is 10.7 $x_0$ wide and 6.8 $F_M$ tall (cf. dashed line in Fig. 1(b) of [9]). Thus the agreement between the simulation of the BGK-Poisson system and that of the drift-diffusion equation is very good, considering the approximations made in the derivation of the latter from the former. Figure 4 shows the dimensionless electron density: the profile at several times belonging to one oscillation period as a function of position in Fig. 4(a), and as a function of time and position in Fig. 4(b).
The electron density profile corresponding to an electric field pulse is that of a traveling dipole wave, such that $n > 1$ behind the peak of the electric field and $0 < n < 1$ ahead of the peak. Comparison with Fig. 3(a) shows that the local maximum of the electron density is reached somewhat later than the peak of the electric field pulse. Figure 5(a) shows the average energy as a function of distance at different instants of one oscillation period. The average energy profile is pulse-like. Its local maximum is always quite close to the peak of the electric field during each oscillation period. Fig. 5(b) shows the average energy profile as a function of position and time during one oscillation period. In Figure 6, we have depicted snapshots of the distribution function $f(x, k, t)$ at different times as marked in Fig. 1 (30 ps, 36 ps, 42 ps, 48 ps, 54 ps, 60 ps) during one period of the self-oscillations. The structure of the distribution function is shown more clearly in the density plots depicted in Fig. 7 for the same times. The electron density profiles at these times are shown in Figure 8. We observe that the distribution function has a local maximum at the location of the peak of the electron density. Similarly, $f$ and $n$ have local minima at the same positions. The distribution function has a local maximum at a positive $k$ (cf. Fig. 7), and this situation persists from the initial time onwards; cf. Figure 9. Comparing the particle trajectories with Fig. 3(a), we observe that the particle positions oscillate with very small amplitudes when the electric field has a local maximum at their locations, and these amplitudes become larger once the pulse has surpassed the particles. In contrast with these great changes in the oscillation amplitude of the particle positions, the wave vectors of the particles oscillate with almost constant amplitudes, as shown in Figs. 10(b), 11 and 12. Since the evolution of the particle wave vector is more regular than the evolution of the particle position, we can save mesh points for the wave vectors. Recalling that the wave vector is a periodic variable, its boundary condition is as follows: when a particle goes out of the domain at $k = \pi$, it is reintroduced at $k = -\pi$. This condition can be readily observed in Figures 11 and 12. In the first case, Fig. 13(a), we need more particles for the method to converge, whereas in the second case, Fig. 13(b), fewer particles are required. Lastly, Figure 16 shows the evolution of the total current density for different time steps in simulations with 90000 particles and 260 mesh grid points for the position and 80 for the wave vector. We observe that our results are similar for time steps $dt = 8 \times 10^{-4}$ ps and smaller. Figures 14 to 16 show that the shape of $J(t)$ is similar for different mesh points and time steps: the device behavior is qualitatively correct even if we take fewer mesh points or larger time steps than needed to attain a numerically precise current-versus-time graph. Smaller $M_k$, $M_x$ and larger $dt$ result in slightly smaller oscillation periods and slightly larger oscillation amplitudes. Our numerical simulations have been carried out using a Matlab code on a computer with a Genuine Intel(R) CPU T2050 @ 1.60 GHz processor (clock speed 1595 MHz). Several computation times for time steps $dt$ of 0.008 and 0.002 ps and 10000 time steps are shown in Table 2. Clearly, the time the computer takes to calculate one time step $dt$ decreases as the number of particles, $M_x$ or $M_k$ decreases. Except for the last row in Table 2, all rows satisfy $N/(M_x M_k) \ge 2.25$, and the corresponding particle numbers and $x$ and $k$ mesh points produce accurate enough results.
Conclusion

We have proposed a deterministic weighted particle method to numerically solve, for the first time, the semiclassical Boltzmann-BGK-Poisson system of equations with a periodic miniband energy dispersion relation. This system describes vertical electron transport in a GaAs/AlAs superlattice under dc voltage bias conditions. When using appropriate values for the injecting contact conductivity and voltage, we find a stable self-sustained oscillation of the current through the structure, which corresponds to the periodic nucleation of electric field pulses at the injecting contact that then move to the receiving contact. The pulses have a large electron density on their trailing edges, which implies large gradients of the electric field there. These gradients are well resolved by particles having large weights there, which is one of the advantages of using the weighted particle numerical method. Our results agree with experimental observations [13,4] and confirm the validity of the Chapman-Enskog perturbation method used to derive a drift-diffusion equation for high electric fields [3]. In fact, the electric field profile and the total current density obtained by numerically solving the drift-diffusion equation [9] agree very well with the numerical solution of the kinetic equations obtained in the present work. Having solved the kinetic equations directly, we can obtain the evolution of the distribution function and its relevant moments, such as the electron density, current density and average energy. The present work paves the way to numerically solving interesting problems in nanoelectronics and spintronics that are described by related quantum kinetic equations with more than one miniband [2].
Finite Element Analysis of the Milling of Ti6Al4V Titanium Alloy Laser Additive Manufacturing Parts

This study aimed to analyze the defect of large residual stress in laser additive manufactured metal parts by establishing a numerical milling simulation of Ti6Al4V titanium alloy thin-walled parts, based on the Johnson-Cook constitutive model of Ti6Al4V titanium alloy, a modified Coulomb friction stress model, the physical chip separation criterion and related theory, combined with the finite element software ABAQUS. The influences of milling depth, initial temperature and milling speed on the forming quality of the formed part were analyzed. The results show that milling changes the residual stress distribution of the deposition layer: it can reduce the residual tensile stress on the surface of the deposition layer produced by the additive manufacturing process, or even change it into compressive stress, and the equivalent Mises stress decreases by 47% compared with the original forming surface. When the initial temperature increases from 20 °C to 400 °C, the maximum equivalent Mises stress of the milled surface decreases by 26%.

Introduction

Additive manufacturing technology directly produces parts without the need for specific tools, which enhances the geometric freedom of the process and allows for the manufacturing of complex geometric shapes, as well as reducing time and production costs [1]. Compared with traditional manufacturing methods, additive manufacturing reduces production costs and manufacturing cycles, and materials are well utilized. Therefore, this process has received significant attention from many industrial sectors and has become a hot research topic [2][3][4][5]. This study looks at Ti6Al4V, an α + β phase titanium alloy. Due to its good mechanical and thermochemical properties, such as specific strength and corrosion resistance, as well as its low cost, it is widely used in the aerospace industry, deep-sea operations, medical equipment, and other high-value markets. Compared with parts made by traditional forging processes, additively manufactured parts usually have more prominent physical and mechanical properties, which increase the difficulty of machining [6]. However, additive manufacturing technology inevitably produces defects, such as dimensional errors, deformation, high residual stress and cracking [7][8][9][10], caused by layer-by-layer manufacturing, which cannot achieve the required accuracy, uniformity of material characteristics and surface quality. Thus, additively manufactured metal parts are restricted in tolerance-critical applications. In order to enhance the performance of parts and reduce the residual stress, metal parts made by additive manufacturing almost always require post-processing. As a machining method, milling can eliminate the step effect caused by the principle of layered manufacturing and ensure the accuracy of the processed parts [11]. Additive-subtractive composite manufacturing technology combines additive forming and subtractive processing synergistically in a single workstation, so that the relative advantages of each process can be used to reduce manufacturing costs. Much research has been conducted with the aim of improving production efficiency [12,13].
Du et al. reported an additive manufacturing and precision milling composite manufacturing method, which can obtain better geometric accuracy and surface quality than selective laser melting (SLM) alone [14]. Zhang et al. studied the effects of milling on the machining performance of selective-laser-melted parts and analyzed the effects of different factors on roughness and residual stress. The results show that the roughness and residual stress were significantly reduced, and the surface quality of the additive products was effectively improved [15]. Lopes et al. studied the effect of milling on thin-walled parts produced by wire arc additive manufacturing. The results show that roughness is negatively correlated with cutting speed and positively correlated with the feed per tooth [16]. Bordin et al. proposed a finite element analysis model for turning Electron Beam Melted (EBM) Ti6Al4V alloy. To calibrate and validate the model, a modified Johnson-Cook (J-C) constitutive equation was used, combined with a mixed adhesive-sliding friction model to describe the tool friction. The model was validated against cutting force and temperature measurements obtained under dry and low-temperature lubrication conditions [17]. Imbrogno et al. established a simulation model for turning Direct Metal Laser Sintering (DMLS) Ti6Al4V alloy and, through the established user subroutines, predicted the cutting force and temperature during the machining process [18]. Bordin et al. compared the cutting performance of forged Ti6Al4V and additively formed titanium alloys in semi-finish machining. The results show that the processing difficulty of additively manufactured alloys is greater than that of forged alloys and the surface roughness value is higher, which causes more serious tool wear: the tool reaches the wear criterion more than 100% faster than with forged alloys [19]. Shunmugavel et al. studied and compared the mechanical properties and machinability of additively manufactured Ti6Al4V and wrought Ti6Al4V titanium alloys. The results showed that the additive titanium alloy, due to its unique acicular structure, has higher strength and hardness than forged Ti6Al4V, but its ductility is poor [20]. Milton et al. studied the surface integrity of Ti6Al4V parts in three forming directions under finishing conditions. The results show that, compared with traditional alloys, the additively manufactured samples have higher work hardening performance, and larger residual stresses are generated during the milling process [21]. Polishetty et al. found that, because of selective laser melting, Ti6Al4V has higher yield strength and hardness. The cutting force for selective laser melted material is larger than for forged material, and the cutting force increases with the cutting speed, which is opposite to the behavior of forgings; this may be caused by the thermal softening characteristics of Ti6Al4V titanium alloy [22]. In general, most research focuses on experimentation; there is little research on the methods, principles and numerical simulation of additive/subtractive composite manufacturing. Based on the simulation results of the additive manufacturing process, this study coupled the additive manufacturing and milling processes of titanium alloy and considered the influence of initial residual stress and temperature on the milling performance of additively manufactured parts.
The Johnson-Cook (J-C) constitutive model of the milled material, the physical chip separation criterion and the modified Coulomb friction model were used to build the coupled model. The influence of the initial temperature and milling speed on the residual stress was also analyzed. The results can provide a theoretical basis and reference for the milling of titanium alloy laser fuse additive parts.

J-C Constitutive Model

The J-C constitutive model [23][24][25] chosen for this research links the strain hardening, strain-rate strengthening and thermal softening effects that determine the flow stress of the cladding layer material. It can reflect well the mechanical behavior of Ti6Al4V titanium alloy under the milling model; the expression is as follows [26,27]:

$$\sigma = (A + B\varepsilon^n)\left(1 + C\ln\frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)\left[1 - \left(\frac{T - T_r}{T_m - T_r}\right)^m\right],$$

where $A$ is the initial yield stress; $B$ is the strain hardening parameter; $C$ represents the strain rate hardening parameter; $n$ is the hardening index; $m$ is the thermal softening index; $\dot{\varepsilon}$ is the equivalent plastic strain rate; $\dot{\varepsilon}_0$ indicates the reference strain rate, for which the value used in this study is 0.001/s; $T_r$ = 20 °C; and $T_m$ = 1668 °C. $A$, $B$, $C$, $m$ and $n$ are material parameters which need to be determined from experimental stress-strain curves at different strain rates and different temperatures. The process of determining the parameters of the J-C model is generally as follows. First, the complete stress-strain curve of the material is obtained through a quasi-static compression experiment, and the first group of J-C parameters, $A$, $B$ and $n$, is determined by data fitting. Then, the current temperature is taken as the reference temperature (room temperature), namely $T = T_r$, and substituted into Equation (1); the stress-strain curves of the titanium alloy are obtained through Hopkinson bar experiments at room temperature and different strain rates, and the parameter $C$ is obtained by data fitting. In the same way, with the first two groups of J-C parameters determined, the remaining terms of Equation (1) are combined; the stress-strain curves of the titanium alloy are obtained through Hopkinson bar experiments at different temperatures and a constant strain rate, and the coefficient $m$ is obtained by data fitting. The specific parameters of the Ti6Al4V titanium alloy in this paper are taken from reference [26] and are shown in Table 1. In addition, the material properties of the Ti6Al4V alloy are listed in Table 2. The geometric design of the tool includes its basic dimensions and its basic sectional design. The basic sectional design mainly includes the tool's rake angle, clearance angle, helix angle, and so on. The geometric parameters of the tool govern the deformation of and interaction between the chip itself, the chip and the tool, and the machined surface and the tool, which play a major role in the cutting performance and cutting effect of the tool. According to the cutting characteristics of the Ti6Al4V titanium alloy material, a solid carbide milling cutter of grade YG8 [28] was selected as the tool, with the following parameters: number of flutes 4, rake angle 10°, clearance angle 12° and helix angle 30°.
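As a quick numerical illustration of Equation (1), the following Python sketch evaluates the J-C flow stress; the constants used in the example call are generic Ti6Al4V-like placeholders, not the calibrated values of Table 1.

```python
import numpy as np

def jc_flow_stress(eps, eps_rate, T, A, B, n, C, m,
                   eps_rate0=1e-3, T_r=20.0, T_m=1668.0):
    """Johnson-Cook flow stress of Equation (1):
    sigma = (A + B*eps^n) * (1 + C*ln(eps_rate/eps_rate0)) * (1 - T*^m),
    with homologous temperature T* = (T - T_r)/(T_m - T_r).
    Defaults follow the text: reference strain rate 0.001/s, T_r = 20 C,
    T_m = 1668 C (temperatures in Celsius, stresses in Pa)."""
    T_star = (T - T_r) / (T_m - T_r)
    return (A + B * eps**n) * (1.0 + C * np.log(eps_rate / eps_rate0)) \
        * (1.0 - T_star**m)

# generic Ti6Al4V-like placeholder constants (NOT the calibrated Table 1 values)
sigma = jc_flow_stress(eps=0.1, eps_rate=1.0e3, T=400.0,
                       A=860e6, B=680e6, n=0.45, C=0.035, m=0.8)
print(sigma / 1e6, "MPa")   # flow stress at 10% strain, 10^3 1/s, 400 C
```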
The milling cutter used an AlCrN surface coating with a hardness of 89 HRA, a bending strength of 1.5 GPa, a compressive strength of 4.5 GPa, a density of 14.4 × 10³-14.6 × 10³ kg/m³ and an impact toughness of 2.5 J/cm². Other tool material performance parameters are shown in Table 3.

Modified Coulomb Friction Stress Model

The milling of the cladding layer is a complex three-dimensional elastoplastic deformation process, with strong interaction between the milling cutter and the cladding layer. Therefore, the friction stress model selected in the finite element simulation is very important for the accuracy of the calculation results. This study adopts the modified Coulomb friction stress model, which is expressed as follows [29]:

$$\tau_f = \min(\mu \sigma_n,\ \tau_s),$$

where $\tau_f$ is the friction stress; $\mu$ is the friction coefficient, taken as 0.5 in the calculations of this study; $\tau_s$ is the ultimate shear stress of the material; and $\sigma_n$ is the normal stress on the contact surface. The modified Coulomb friction stress model can automatically determine the friction state (sticking or sliding) based on the contact stress during the milling process.

Chip Separation Criterion

The milling process is accompanied by the continuous removal of material, as the milling cutter continuously cuts material from the workpiece. The chip separation criterion is used to judge whether the chips can separate from the cladding layer. In ABAQUS finite element simulations, there are mainly geometric and physical separation criteria. The definition of a geometric separation criterion is simple, easy to judge, and stable [30]; however, when creating the cladding layer milling model, the separation line between the chip and the cladding layer must be established artificially, and the selection of the separation value requires engineering experience, so the results of geometric separation criteria generally have larger errors and lower accuracy. The physical separation criterion does not need an artificial separation line when modeling, and the results of the cutting simulation are more realistic and more in line with the actual machining process; thus, the physical separation criterion is selected in the simulation. For the Ti6Al4V titanium alloy milling model, the J-C material damage model was selected, which considers the influences of the stress state, strain rate and temperature on material damage. The specific expression is as follows [31]:

$$D = \sum \frac{\Delta\varepsilon_p}{\varepsilon_f^p}, \qquad \varepsilon_f^p = \left[D_1 + D_2 \exp\left(D_3 \frac{\sigma_m}{\bar{\sigma}}\right)\right]\left(1 + D_4 \ln\frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)\left(1 + D_5 T^*\right),$$

where $\Delta\varepsilon_p$ is the equivalent plastic strain increment; $\varepsilon_f^p$ is the equivalent strain at material failure; $D_1$, $D_2$ and $D_3$ are stress-state-related parameters; $D_4$ and $D_5$ are the strain-rate-related parameter and the thermal softening parameter, respectively; $\dot{\varepsilon}_0$ is the reference strain rate, taken as 1/s in the calculations of this study; $\sigma_m$ and $\bar{\sigma}$ are the average normal stress and the equivalent stress, respectively; and $T^* = (T - T_r)/(T_m - T_r)$ is the homologous temperature as in Equation (1). The related parameter settings are shown in Table 4 (Ti6Al4V titanium alloy material damage failure parameters).

Basic Settings of the Finite Element Model

The size of the substrate in the model is set to 15 mm × 9 mm × 4 mm, and the size of the deposition layer is 11 mm × 1 mm × 2.2 mm. The deposition layer and the substrate use C3D8RT elements. In the model, the part of the deposition layer involved in cutting is meshed more finely.
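The two local models introduced above, the modified Coulomb friction law and the J-C damage criterion, can be sketched in a few lines of Python; the failure constants $D_1$-$D_5$ must be supplied by the caller (the calibrated values of Table 4 are not reproduced here), and all function names are illustrative.

```python
import numpy as np

def friction_stress(sigma_n, tau_s, mu=0.5):
    """Modified Coulomb friction law: sliding stress mu*sigma_n capped by the
    shear flow limit tau_s of the workpiece, tau_f = min(mu*sigma_n, tau_s);
    the cap is what switches the contact from sliding to sticking."""
    return np.minimum(mu * sigma_n, tau_s)

def jc_damage_increment(d_eps_p, sigma_m, sigma_eq, eps_rate, T, D,
                        eps_rate0=1.0, T_r=20.0, T_m=1668.0):
    """Increment of the Johnson-Cook damage variable, d_eps_p / eps_f, with
    eps_f = [D1 + D2*exp(D3*sigma_m/sigma_eq)]
            * (1 + D4*ln(eps_rate/eps_rate0)) * (1 + D5*T_star);
    the element fails once the accumulated sum reaches 1.
    D = (D1, ..., D5) are the failure constants (Table 4, not reproduced)."""
    D1, D2, D3, D4, D5 = D
    T_star = (T - T_r) / (T_m - T_r)
    eps_f = (D1 + D2 * np.exp(D3 * sigma_m / sigma_eq)) \
        * (1.0 + D4 * np.log(eps_rate / eps_rate0)) * (1.0 + D5 * T_star)
    return d_eps_p / eps_f
```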
The milling cutter is modeled as a rigid body, with a refined mesh in the part involved in milling; the diameter of the milling cutter is 5 mm, and C3D4T elements are used, as shown in Figure 1. Ti6Al4V is a common titanium alloy wire material. The boundary conditions of the cutting simulation are as follows: the substrate is completely fixed at all four corners, the milling cutter rotates clockwise about its center line, and the feed motion is carried out along the workpiece. The milling method is dry milling. According to the actual physical process of milling, the following assumptions are made without affecting the accuracy of the simulation results: (1) the Ti6Al4V titanium alloy material is isotropic; (2) the tool is set as a rigid body; only the heat conduction of the tool is considered, and the deformation and friction loss of the tool are ignored; (3) during the milling process, vibration of the tool and workpiece caused by environmental factors is not considered. Figure 2 shows the evolution of the stress field during laser fuse deposition of the thin-walled part; Figure 2a-e are contour maps of the stress field distribution of the cladding layer at the end of the scans of layers 1 to 5. These pictures show that when the number of formed layers is low, the substrate temperature is still low, so a large residual stress remains after each layer is deposited; as the number of deposited layers increases, the substrate is fully warmed up, and the residual stress generated by the previous layer is released when the following layer is deposited. Thus, with an increase in the number of cladding layers, the stress decreases and the distribution becomes more uniform. Figure 3 shows contour maps of the stress field distribution at different times during milling; 0 s in the figure is the stress field map after the additive-manufactured part has cooled for 900 s. As seen in Figure 3a, there are significant differences in the stress states of different regions: the residual stresses in the central region are relatively stable and basically consistent, while the stress fluctuations are large in the two regions at the start and end points of the laser heat source. The maximum residual stress is concentrated at the intersection of the cladding layer and the substrate. These results are due mainly to the thermal end effect [22]. As seen in Figure 3b-f, as milling progresses, the von Mises values of the thin-walled piece are significantly reduced and, at the same time, the distribution of residual stress in the thin-walled part is changed. At the initial moment, the deposition layer as a whole is in a tensile stress state; as the cutting process progresses, with the gradual removal of material from the deposited surface, the tensile stress remaining on the surface of the deposition layer caused by the additive manufacturing process decreases or even becomes compressive stress. To study the distribution of residual stress better, this study selects five paths, from the bottom up at equal intervals, in the stable cutting area of the machined surface, and selects 20 nodes at equal intervals on each path. Residual stress values are collected at each node of the selected paths, and the collected residual stresses are averaged to obtain the residual stress of each path.
Residual Stress Analysis

As shown in Figure 4, with the other parameters unchanged, the milling speed v_c is set to 140 m/min and the radial cutting depth a_e to 0.3 mm; the residual stress of the milled surface is then compared with that of the original as-formed surface. The maximum tensile stress on the original surface reaches 396 MPa, and the minimum is 234 MPa. After milling, the maximum tensile stress is 205 MPa and the minimum is 145 MPa; compared with the residual tensile stress of the original surface, this represents a decrease of about 47% on average, a substantial reduction. At the same time, the standard deviation of the residual stresses over the whole surface is smaller after milling, indicating that the residual stress distribution is more uniform after milling and that its anisotropy improves considerably. Cutting temperature is an important variable to monitor during milling. Figure 5 compares the peak tool-tip temperature when machining forged parts and additively manufactured Ti6Al4V titanium alloy at different milling speeds. At every milling speed, the peak cutting temperature of the additively manufactured Ti6Al4V is higher than that of the forged Ti6Al4V parts, and the milling temperature rises as the milling speed increases. Under the same processing conditions, the peak cutting temperature of the additive manufacturing Ti6Al4V milling model is about 26.2% higher than that of the forging. Figure 6a presents a comparison of the equivalent von Mises stress: the maximum tensile stress of the original surface is 325 MPa, while after milling the maximum tensile stress is 176 MPa. At a milling depth of 0.1 mm, the residual stress drop is largest, about 54%. Examining the influence of milling depth on surface residual stress shows that the surface residual tensile stress increases as the milling depth increases. As seen in Figure 6b-d, milling has a significant effect on the surface stresses in the X and Y directions, with maximum decreases of 77% and 131%, respectively (a decrease beyond 100% indicates that the tensile stress has turned compressive). The effect on the Z direction stress is weaker, with the largest drop being about 47%. At the same time, the X, Y, and Z direction stresses of the milled surface as a whole shift from tensile toward compressive as the milling depth increases. The initial temperature has a significant effect on the milling performance of titanium alloys. In hybrid additive-subtractive manufacturing, additive and subtractive processing alternate with each other, so the subtractive machining is performed in a high-temperature environment. Studying the milling performance of laser melting wire additive manufacturing Ti6Al4V deposits under hot, dry milling conditions therefore has important guiding significance. Figure 7 shows the residual stress at different initial temperatures and milling speeds. Figure 7a compares the equivalent von Mises stress of the milled surface: as the initial temperature increases, the residual stress on the milled surface gradually decreases.
As the initial temperature increases from room temperature to 400 °C, the maximum average equivalent von Mises stress of the milled surface drops from 122 MPa to 90 MPa, an average decrease of about 26%. In Figure 7b, the residual stress in the X direction of the milled surface gradually decreases as the initial temperature increases. When the initial temperature rises from room temperature to 400 °C, the X direction stress of the milled surface decreases by 136.5% on average; this large reduction in the X direction residual tensile stress can effectively reduce the risk of cracks and other defects in the additive part perpendicular to the X direction. Figure 7b also shows that the milling speed strongly influences the X direction stress: as the milling speed increases, the X direction stress gradually decreases and can even become compressive. As seen in Figure 7c, the initial temperature and milling speed have little effect on the Y direction stress of the milled surface. Figure 7d compares the Z direction stress of the milled surface. The initial temperature has little effect on it, although the compressive stress increases somewhat as the initial temperature rises. The milling speed, by contrast, has a marked effect on the Z direction stress: as it increases, the Z direction compressive stress of the milled surface gradually increases. Taken together, appropriately raising the initial preheating temperature and increasing the milling speed are beneficial for reducing the residual stress and improving the surface quality of the workpiece.

Model Validity Verification

To prove the validity of the model, it must be verified against experimental results. Figure 8 compares the simulated and experimental variation of the milling force [32]. The milling force trend in the simulation is essentially consistent with the experimental data, which indirectly confirms the validity of the model.

Conclusions

This study established a three-dimensional finite element model of milling laser-fused wire additively manufactured titanium alloy by coupling the additive manufacturing and milling processes. The initial residual stress and temperature of the additively manufactured part were taken into account as factors affecting milling performance, making the simulation more realistic. The effects of cutting depth, milling speed, and other factors on the milling performance of additively manufactured titanium alloy were analyzed, and the following conclusions were drawn: (1) Milling changes the distribution of residual stress in the cladding layer: it reduces the tensile stress generated by the additive manufacturing process and can even convert it into compressive stress. Meanwhile, the residual stress distribution on the milled surface is more uniform, and the anisotropy is greatly improved. The equivalent von Mises stress is reduced by 47% on average compared with that of the original as-formed surface. (2) At all milling speeds tested, the peak milling temperature of additively manufactured parts is higher than that of forged parts, and the milling temperature gradually increases as the milling speed increases.
Under the same processing conditions, the peak milling temperature of the additive manufacturing Ti6Al4V titanium alloy milling model is about 26.2% higher. (3) The residual stress on the milled surface decreases as the initial temperature increases. When the initial temperature rises from room temperature to 400 °C, the maximum von Mises stress of the milled surface decreases by 26% on average, the X direction stress decreases by 136.5% on average, and the residual tensile stress decreases significantly.
Evolutionary NAS with Gene Expression Programming of Cellular Encoding

The renaissance of neural architecture search (NAS) has seen classical methods such as genetic algorithms (GA) and genetic programming (GP) exploited for convolutional neural network (CNN) architectures. While recent work has achieved promising performance on visual perception tasks, the direct encoding scheme of both GA and GP suffers from limited functional complexity and does not scale well to large architectures like CNNs. To address this, we present a new generative encoding scheme, symbolic linear generative encoding (SLGE): a simple yet powerful scheme which embeds local graph transformations in chromosomes of linear fixed-length strings to develop CNN architectures of varying shapes and sizes via the evolutionary process of gene expression programming. In experiments, SLGE shows its effectiveness by discovering architectures that improve on the performance of state-of-the-art handcrafted CNN architectures on the CIFAR-10 and CIFAR-100 image classification tasks, and it achieves a classification error rate competitive with existing NAS methods while using fewer GPU resources.

Introduction

Historically, evolutionary neural architecture search (NAS) research has been of interest to the AI community for over two decades [Yao, 1999]. Recent growing interest in the automation of deep learning has seen phenomenal development of new NAS methods based on reinforcement learning (RL) and other techniques in the last three years [Elsken et al., 2019]. However, classical NAS methods such as Genetic Algorithms (GA) [Xie and Yuille, 2017; Sun et al., 2019] and Genetic Programming (GP) [Suganuma et al., 2018] have also been exploited to develop convolutional neural networks (CNNs) for visual perception tasks. Besides achieving promising results, classical NAS methods also consume fewer computational resources [Sun et al., 2019] than their RL-based competitors. But the direct encoding scheme of both GA and GP has two inherent limitations: (1) chromosomes of fixed length, and (2) genotype and phenotype spaces that are not distinctly separated; both limit their functional complexity [Ferreira, 2006]. For example, the fixed-length chromosome scheme employed in Genetic CNN [Xie and Yuille, 2017] does not scale well to large architectures. To address these issues, AE-CNN [Sun et al., 2019] introduced a variable-length scheme in GA to encode CNN architectures of varying shapes and sizes, similar to the nonlinear structures in GP. Although the variable-length scheme exhibits a certain amount of functional complexity, it is difficult to reproduce with crossover operations. The reason for this crossover limitation in AE-CNN and the similar CGP-CNN [Suganuma et al., 2018] is that their genotype and phenotype spaces are not explicitly separated, so genetic modification is directly subject to phenotype structural constraints. DENSER [Assunção et al., 2019] combines GA with Grammatical Evolution (GE) to separate the genotype and phenotype spaces in analogy to nature. However, GE lacks modularity, and it is not flexible to modify the grammar with modules [Swafford et al., 2011]. To this end, we argue that the development of evolutionary NAS methods that separate genotype and phenotype spaces to generate CNN architectures for visual perception tasks is still in its infancy.
In this work, we introduce a new generative encoding scheme, symbolic linear generative encoding, which embeds the local graph transformations of Cellular Encoding [Gruau, 1994] in the simple linear fixed-length chromosomes of Gene Expression Programming [Ferreira, 2006] to develop CNN architectures of varying shapes and sizes. Moreover, to enable the evolutionary process to discover new motifs for the architectures, regular convolutions are employed as the basic search units, as opposed to sophisticated convolutions such as the depthwise separable and asymmetric convolutions in AmoebaNet [Real et al., 2019], or sophisticated blocks like the ResNet and DenseNet blocks in AE-CNN [Sun et al., 2019]. In experiments, the preliminary results show the effectiveness of the proposed method, which discovers a CNN architecture that obtains 3.74% error on the CIFAR-10 image dataset benchmark and 22.95% error when transferred to CIFAR-100. These results are competitive with current auto-generated CNN architectures and improve on the performance of state-of-the-art handcrafted ones. The remainder of the paper is organised as follows. Section 2 discusses the related work, and Section 3 presents the details of the proposed method. The experimental results are reported in Section 4, and Section 5 concludes this study with future directions.

Evolutionary NAS

Most earlier work in evolutionary NAS evolved both the network architecture and its connection weights at a small scale [Yao, 1999]. Recent neural networks such as CNNs have been scaled up to millions of connection weights to improve performance on a given task, and learning the connection weights of these large-scale networks via back-propagation outperforms the evolutionary approach. Thus, recent evolutionary NAS work [Elsken et al., 2019] has focused on evolving only the network architectures and using back-propagation to optimize the connection weights. Since the best architecture shape and size for given data are not known, chromosomes of variable length using a direct encoding scheme have been employed so that architectures can adapt their shapes and sizes to a given task [Real et al., 2017; Sun et al., 2019]. CGP-CNN [Suganuma et al., 2018] used Cartesian GP to represent CNN architectures as variable-length structures. To depart from the functional complexity deficiency of direct encoding, DENSER [Assunção et al., 2019] combines GA with GE to adopt a genotype-phenotype distinction and explicitly uses grammars to generate phenotypes of CNN architectures. Generally, the architecture search space is classified into two categories: the global search space, which defines the entire architecture (macroarchitecture) [Real et al., 2017; Sun et al., 2019], and the cell-based search space for discovering a microarchitecture (cell) which can be stacked repeatedly to build the entire architecture [Real et al., 2019]. The cell is a directed acyclic graph (DAG) used as a building block to form the architecture. CNN architectures discovered by the cell-based approach are flexible and transferable to other tasks, and they perform better than those from the global search space [Pham et al., 2018]. The popular cell-based approach is NASNet, which involves two types of cells: the normal cell and the reduction cell (used to reduce feature resolutions). Zhong et al. [2018] and Liu et al. [2018] proposed similar cell-based approaches but used max-pooling and separable convolution layers, respectively, to reduce the feature resolutions.
The basic search units in both search spaces are mostly sophisticated convolutions, such as the depthwise separable and asymmetric convolutions in AmoebaNet [Real et al., 2019], or sophisticated blocks, such as the ResNet and DenseNet blocks in AE-CNN [Sun et al., 2019]. These search units reduce the search space complexity; however, they may harm flexibility and restrict the discovery of new architecture building blocks that could improve on the current handcrafted ones. Thus, in this work, we use regular convolutions (Section 3.1).

Gene Expression Programming

Gene Expression Programming (GEP) is a full-fledged genotype-phenotype evolutionary method based on both GA and GP. Chromosomes consist of linear fixed-length genes similar to those used in GA, and are developed in phenotype space as expression trees of different shapes and sizes, similar to the parse trees in GP. The genes are structurally organized in a head-and-tail format called Karva notation [Ferreira, 2006]. The separation of genotype and phenotype spaces with distinct functions allows GEP to perform with an effectiveness and efficiency that surpass GA and GP [Zhong et al., 2017]. Ferreira [2006] proposed that chromosomes in GEP can completely encode an ANN so as to discover an architecture via the evolutionary process. Thus far, however, GEP has yet to be adopted for complex architectures like CNNs. The Achilles' heel of GEP is its Karva notation, which does not allow hierarchical composition of candidate solutions; this means that good evolved motifs are mostly destroyed by genetic modifications in subsequent generations [Li et al., 2005]. Thus, to adopt GEP for CNN architecture search, we propose a new generative scheme that naturally embeds motifs in the Karva expression of GEP as individual chromosomes, and consequently a new genotype-phenotype mapping is encapsulated in the GEP convention of the evolutionary process.

Cellular Encoding

Cellular Encoding (CE) is a generative encoding based on simple local graph transformations that control the division of nodes, which evolve into ANNs of different shapes and sizes. The graph transformations are represented by program symbols with unique names; these symbols depict a grammar tree (program) which encapsulates the procedure for developing an ANN via the evolutionary process from a single initial unit that has both input and output nodes. CE has shown its efficiency on a wide range of problems, such as evolving ANNs for controlling two poles on a cart and for the locomotion of a six-legged robot. In this work, we adopt four program symbols of CE and embed them in the fixed-length individual chromosomes of GEP, since their transformations, particularly CPI and CPO [Gruau and Quatramaran, 1996], generate motifs similar to the ResNet block that is prominent in deep neural networks. We briefly explain the program symbols we adopt and refer to Gruau [1994] and Gruau and Quatramaran [1996] for in-depth details. SEQuential division (SEQ): it splits the current node into two and connects them in series; the child node inherits the outputs of the parent node. A minimal sketch of this transformation follows.
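As a concrete illustration of the SEQ transformation just described, the following Python sketch applies a sequential division to a tiny successor-list graph. It is a minimal interpretation of the rule as stated (the child inherits the parent's outputs; the parent then feeds only the child), not the authors' Algorithm 2; CPI and CPO, which create parallel residual-like branches, are not shown.

```python
def seq_division(succ, parent, child):
    """SEQuential division (SEQ): split `parent` into two serial nodes.
    `succ` maps each node to its list of successor nodes (a DAG).
    The child inherits the parent's outputs; the parent keeps its inputs
    and now connects only to the child."""
    succ[child] = succ[parent]   # child takes over the parent's outputs
    succ[parent] = [child]       # parent -> child becomes the only out-edge
    return succ

g = {"input": ["conv_a"], "conv_a": ["output"], "output": []}
print(seq_division(g, "conv_a", "conv_b"))
# {'input': ['conv_a'], 'conv_a': ['conv_b'], 'output': [], 'conv_b': ['output']}
```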
We formulate the CNN architecture search problem as follows. Given the problem space Ψ = {A, S, P, tD, vD}, where A is the architecture search space, S represents the search strategy, P denotes the performance measure, and tD and vD are the training and validation datasets respectively, the objective is to find a smaller CNN architecture a* ∈ A via the search strategy S such that, after training on the dataset tD, it maximizes some performance measure P (in this case classification accuracy P_acc) on the validation dataset vD. A smaller architecture here means a model with a smaller number of parameters θ. Mathematically, the objective function F can be formulated as:

$a^* = \arg\max_{a \in A} P_{acc}\big(L(a, \theta, tD),\, vD\big) \quad \text{s.t.} \quad |\theta| \le T_{params}, \qquad (1)$

where L represents the training of the model parameters θ with the loss function and T_params denotes the target number of parameters. In this section, we describe the architecture search space and the search strategy that we propose.

Search Space

The basic search units in the search space consist of regular convolutions with batch normalization and ReLU, together with CE program symbols. The regular convolution units may enable the evolutionary process to find new motifs for CNN architectures rather than predefined ones (depthwise separable and asymmetric convolutions, ResNet and DenseNet blocks). Since predefined units reduce the complexity of the architecture search space, they may harm the flexibility of discovering new motifs that can improve on the current handcrafted ones. Thus, we adopt a cell-based search approach in which each node in a discovered cell is associated with a regular convolution from the search space, whereas edges represent the direction of latent information flow. Following Zhong et al. [2018], we use max-pooling to reduce the feature map resolution, and a 1×1 convolution is applied when necessary to downsample the input depth. The regular convolutions included in the search space are 1×1, 1×3, 3×1, and 3×3. Each operation has a stride of one, and appropriate padding is applied to preserve the spatial resolution of the feature maps. The CE program symbols are SEQ, CPI, CPO, and END. The complexity of the search space can be expressed as (#program symbols)^h × (#convolution operations)^(h+1) × n possible architectures, where h is the number of CE program symbols that form the head of a gene in a chromosome and n is the number of genes in a chromosome. For example, for chromosomes with h=2 and n=3, the search space contains 3072 possible architectures; a quick check of this count is sketched below.
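The search-space count above is easy to sanity-check. The short Python sketch below evaluates (#program symbols)^h × (#convolution operations)^(h+1) × n with the four CE symbols and four convolutions listed in the paper, and reproduces the 3072 figure for h=2, n=3.

```python
def slge_search_space(h, n, n_symbols=4, n_convs=4):
    """Possible SLGE architectures: (#program symbols)^h *
    (#convolution operations)^(h+1) * n. Four CE symbols
    (SEQ, CPI, CPO, END) and four convolutions (1x1, 1x3, 3x1, 3x3)."""
    return (n_symbols ** h) * (n_convs ** (h + 1)) * n

assert slge_search_space(h=2, n=3) == 3072   # the paper's example
print(slge_search_space(h=3, n=2))           # the best configuration found later
```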
Symbolic Linear Generative Encoding

Encoding network architectures into genotypes using a particular encoding scheme is the first stage of an evolutionary NAS task. We propose a new generative encoding scheme, symbolic linear generative encoding (SLGE), which embeds the local graph transformations of CE in the simple linear fixed-length chromosomes of GEP to develop CNN architectures of different shapes and sizes. Chromosomes in SLGE can evolve different motifs via the evolutionary process of GEP to build CNN architectures. SLGE explicitly separates the genotype and phenotype spaces in analogy to nature, to benefit from all the advantages of the evolutionary process without the functional complexity deficiency. It is worth emphasizing how simple the implementation of SLGE is, because of its linear fixed-length structure. We implement SLGE on top of geppy, a GEP library based on the DEAP framework.

Representation

Genetic representation defines the encoding of a phenotype into a genotype. A simple but effective and efficient representation significantly affects the overall performance of the evolutionary process. In SLGE, the chromosomes are structured similarly to those in GEP. Chromosomes are made up of genes of equal fixed-length strings. Each gene is composed of a head, which consists of CE program symbols, and a tail of regular convolutions. Given the length h of a gene head, the length t of the gene tail is a function of h expressed as t = h + 1; thus, the length of a gene is 2h + 1. Figure 1 is an example of a typical SLGE chromosome of two genes, and its phenotype (cell) is presented in Figure 2.

Mapping and Fitness Function

The mapping algorithm translates genotypes (chromosomes) into phenotypes (cells), each of which is stacked repeatedly to build a candidate CNN architecture; the fitness function is then used to train each architecture for a few epochs and evaluate it to determine its fitness quality in the phenotype space, which is then mapped back into genotype space, where genetic variations occur to produce offspring for the next generation. The representation can be expressed mathematically as:

$F_g : \Phi_g \to \Phi_p, \qquad F_p : \Phi_p \to \mathbb{R},$

where F_g is the mapping algorithm which maps a set of genotypes ϕ_g ⊂ Φ_g into the phenotype space Φ_p to form individual architectures, and F_p represents the fitness function that trains each individual architecture (phenotype) ϕ_p ∈ Φ_p for a few epochs to determine its fitness value in the fitness space R. Algorithms 1 and 2 give the mapping algorithm F_g used to translate chromosomes into cells in SLGE. The algorithm takes a chromosome of linear fixed-length string as input and produces a DAG which represents a candidate cell. For example, given the chromosome in Figure 1 as input, Algorithms 1 and 2 will produce the cell in Figure 2 as output. The fitness function F_p is a surrogate for the objective function F in equation (1), which aims to find an individual architecture that maximizes classification accuracy on the validation dataset, subject to a model size constraint. However, since we search for cells in the architecture search space during the evolutionary process, the model size constraint is not considered in the fitness evaluation of individuals; rather, we apply the model size constraint only to the entire architecture. Thus, the fitness function is simply the objective function F in equation (1) without the size constraint.

Figure 1: The structural representation of a typical SLGE chromosome. The chromosome has two genes of equal fixed length, and each gene has a head of three CE program symbols and a tail of four regular convolution operations. This chromosome is the genotype of the cell (Figure 2) used to build the best evolutionary-discovered network in Table 2.

Figure 2: The representative cell of the chromosome in Figure 1; it is the cell used to build the top network in Table 2. (In general, the convolution nodes without a successor in the cell are depthwise concatenated to provide an output, and if a node has more than one predecessor, the predecessors' outputs are added together. This is a common approach in cell-based search.)

The loss function is used to train each individual via back-propagation on the training and validation sets, and the fitness function determines its fitness value, which represents the probability of the individual's survival. The higher the fitness value, the more likely the individual is to have progeny and survive into the next generation. A sketch of gene generation under this head/tail rule follows.
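A chromosome with this head/tail structure is straightforward to generate. The sketch below builds random SLGE-style genes obeying the t = h + 1 rule (a head of CE symbols, a tail of convolutions, total length 2h + 1); it is a hypothetical stand-in for the geppy-based implementation, with the symbol and convolution sets taken from the search-space description.

```python
import random

SYMBOLS = ["SEQ", "CPI", "CPO", "END"]   # CE program symbols (gene head)
CONVS = ["1x1", "1x3", "3x1", "3x3"]     # regular convolutions (gene tail)

def random_gene(h):
    """One SLGE gene: a head of h CE symbols followed by a tail of
    t = h + 1 convolutions, for a total length of 2h + 1."""
    head = [random.choice(SYMBOLS) for _ in range(h)]
    tail = [random.choice(CONVS) for _ in range(h + 1)]
    return head + tail

def random_chromosome(n_genes, h):
    """A chromosome is n_genes equal fixed-length genes."""
    return [random_gene(h) for _ in range(n_genes)]

print(random_chromosome(n_genes=2, h=3))  # same shape as the Figure 1 example
```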
Evolutionary Process

We adopt the evolutionary process of GEP and refer to Ferreira [2006] for details of the steps presented here.

Step 1: Initialization. Randomly generate a population of SLGE chromosomes in a uniformly distributed manner.
Step 2: Mapping. Apply the mapping function F_g to translate individual chromosomes into cells.
Step 3: Fitness. Build a candidate CNN architecture from each cell, train each via back-propagation, and evaluate its fitness using the fitness function F_p.
Step 4: Selection. Select individuals to form the next generation's population via a roulette-wheel strategy with elitism.
Step 5: Mutation. Randomly mutate elements in a chromosome. The structural rule must be preserved (e.g., a convolution element cannot be assigned to a gene head).
Step 6: Inversion. Randomly invert a sequence of elements in a gene head of individual chromosomes.
Step 7: Transposition. Randomly replace a sequence of elements with consecutive elements in the same chromosome. The structural rule must be preserved.
Step 8: Recombination. Cross over gene elements of two chromosomes using two-point and gene crossovers.
Step 9: Go to Step 2 if the maximum generation is not reached; otherwise return the individual with the highest fitness as the best discovered cell.

[Algorithm 1: The genotype-phenotype mapping function F_g. Input: a chromosome of linear fixed-length string ϕ_g ∈ Φ_g; output: a directed acyclic graph (cell). The listing initializes the DAG with a null node having input and output nodes, then iterates over the n genes of the chromosome, creating a queue Q of the convolutions in each gene.]

[Algorithm 2: The CE local graph transformation procedure. It transforms gene subgraph G with parent node pnode to generate child node cnode via CE program symbol ps.]

Experiments

The preliminary experiments aimed at verifying the effectiveness of the proposed method in discovering cells for CNN architectures that perform well on image classification tasks. We ran eight experiments, two for each of four different chromosome configurations (Table 2), plus a random search as a baseline. The search was conducted on the CIFAR-10 [Krizhevsky et al., 2009] dataset, and the best discovered architecture was transferred to CIFAR-100 [Krizhevsky et al., 2009]. The experiments were performed on one 11 GB GeForce GTX 1080 Ti GPU machine for 20 days. We implemented all the architectures with PyTorch and trained them using the fastai library. The codes are available at https://github.com/cliffbb/geppy nn.

Evolutionary Settings

We used a small population of 20 individuals, since GEP is capable of solving relatively complex problems with small populations [Ferreira, 2002], and 20 generations to reduce the computational cost of the search process. The other evolutionary parameter settings used for each run are summarized in Table 1; these are the default settings in GEP. A skeleton of the whole loop, under stated simplifications, is sketched below.
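Steps 1-9 amount to a fairly standard GEP loop. The skeleton below arranges them in Python under explicit simplifications: the `fitness` callable stands in for decoding a cell, building the network, and briefly training it; inversion, transposition, and recombination (Steps 6-8) are collapsed into structure-preserving point mutation for brevity; and the symbol and convolution sets repeat those of the search space. It is a sketch of the control flow, not the authors' geppy implementation.

```python
import random

SYMBOLS = ["SEQ", "CPI", "CPO", "END"]
CONVS = ["1x1", "1x3", "3x1", "3x3"]

def random_chromosome(n_genes, h):
    return [[random.choice(SYMBOLS) for _ in range(h)] +
            [random.choice(CONVS) for _ in range(h + 1)]
            for _ in range(n_genes)]

def evolve(fitness, pop_size=20, generations=20, n_genes=2, h=3, p_mut=0.1):
    """GEP-style loop: init -> fitness -> roulette selection with elitism ->
    structure-preserving mutation (head gets symbols, tail gets convolutions)."""
    pop = [random_chromosome(n_genes, h) for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(c) for c in pop]
        elite = pop[scores.index(max(scores))]
        total = sum(scores) or 1.0

        def pick():  # roulette-wheel selection
            r, acc = random.uniform(0, total), 0.0
            for c, s in zip(pop, scores):
                acc += s
                if acc >= r:
                    return c
            return pop[-1]

        nxt = [elite]  # elitism: keep the best individual
        while len(nxt) < pop_size:
            child = [gene[:] for gene in pick()]
            for gene in child:  # point mutation, head/tail rule preserved
                for i in range(len(gene)):
                    if random.random() < p_mut:
                        gene[i] = random.choice(SYMBOLS if i < h else CONVS)
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# toy fitness: reward SEQ-heavy chromosomes (a placeholder for trained accuracy)
best = evolve(lambda c: sum(g[:3].count("SEQ") for g in c) + 0.1)
print(best)
```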
Training Details

Each network begins with a 3×3 convolution stem with output channel size C, followed by three blocks denoted B = [b_1, b_2, b_3], where block i consists of the cell repeated b_i times with a max-pooling layer inserted between blocks to downsample, and ends with a classifier. Each max-pooling halves the feature maps and doubles the channels. During the search, the CIFAR-10 50k training set was split into 40k training and 10k validation subsets with normalization. We used a relatively small network with C=16 and B=[1, 1, 1]. Each candidate network was trained on the training subset and evaluated on the validation subset to determine its fitness value, using the 1-cycle policy of Smith [2018] with the Adam optimizer. The learning rate was set to go from 0.004 to 0.1 linearly while the momentum went from 0.95 to 0.85 linearly in phase one; in phase two, the learning rate followed cosine annealing from 0.1 to 0, while the momentum went from 0.85 to 0.95 with the same annealing. The weight decay was set to 0.0004, the batch size to 128, and the number of training epochs to 25. After the search, we set C=40, and each evolutionary-discovered cell was repeatedly stacked to build large CNN architectures subject to the model size constraint T_params ≤ 3.5M. We trained each CNN architecture from scratch on the training and validation subsets for 800 epochs and report the classification error on the CIFAR-10 10k test dataset. The learning rate remained as during the search for the first 350 epochs, and was then reset to go from 0.00012 to 0.003 in phase one and from 0.003 to 0 in phase two of the 1-cycle policy. We augmented the training subset as in He et al. [2016a], and the other hyperparameters remained the same as during the search.

Search on CIFAR-10

To investigate the effectiveness of SLGE, we conducted an evolutionary search on four different chromosome configurations: 2 genes with heads of 2 and 3, and 3 genes with heads of 2 and 3. Each was run twice on the CIFAR-10 dataset. The chromosome of 2 genes with a head of 3 discovered the best architecture, which obtains a 3.74% classification error (Table 2). The result is compared with other search methods in Table 3. Compared with handcrafted networks, SLGE networks improve on the performance of the most popular handcrafted ones: SLGE achieves a classification error 0.8% and 1.4% lower than ResNet-1001 and FractalNet, respectively, with fewer parameters. However, DenseNet-BC (k=40) performs better than the SLGE networks while using more parameters. Compared with reinforcement-learning NAS networks, SLGE competes with Block-QNN-S (which used more parameters) and NASNet-A (which consumed more GPU resources). Compared with evolutionary NAS networks, SLGE performs significantly better than Genetic CNN and Large-scale Evolution, but falls slightly behind the state-of-the-art Hierarchical Evolution and AmoebaNet-A networks; SLGE consumes 1/15 and 1/157 of the GPU computing days consumed by Hierarchical Evolution and AmoebaNet-A, respectively.

Random Search

To evaluate the effectiveness of the SLGE representation scheme, a simple random search was performed as the baseline. We randomly generated a population of ten individual networks from the top chromosome configuration (Table 2), trained all of them from scratch with no evolutionary modifications on CIFAR-10 using the same training settings with 500 epochs, and report the network with the lowest classification error (Table 3). We achieve a mean classification error of 5.85% with a standard deviation of 1.77% over the ten individual networks. This is a considerably competitive result, which demonstrates the effectiveness of our proposed SLGE scheme with regular convolution search units.

Evaluation on CIFAR-100

The top architecture learned on the CIFAR-10 task was evaluated on CIFAR-100 to assess the transferability of the evolutionary-discovered cell. CIFAR-100 has similar features to CIFAR-10 but with 100 classes, which makes it a rather more challenging classification task. We simply transferred the best architecture from CIFAR-10 (Table 2) but trained it from scratch with a classifier head of 100 classes; all training settings remained the same. We achieve a classification error 4.1% better than MetaQNN and 6.1% better than Genetic CNN, and compete with the other networks (Table 3).
Discussion

We make a few observations from the preliminary experimental results. The networks developed from chromosomes with 2 genes performed better than those with 3 genes, something we least expected. The evolutionary-discovered cell (Figure 2) of the top network has a topological structure similar to DenseNet, with a well-formed asymmetric convolution. This underscores the effectiveness of the proposed method, and further experiments will be conducted in future to ascertain its robustness. In general, the preliminary results on the CIFAR-10 and CIFAR-100 classification tasks are very promising, even though no evolutionary parameter was tuned. Thus, the natural future direction is to tune the evolutionary parameters and extend the task to the more challenging ImageNet classification task to verify the generality of the proposed method.

Conclusion

We have presented an effective evolutionary method which discovers high-performing cells as building blocks for CNN architectures, based on a representation scheme that embeds elements of CE in the chromosomes of GEP. We show that our SLGE representation scheme, coupled with regular convolution units, can achieve significant results even with a simple random search strategy. Our top network obtains highly competitive performance relative to the state-of-the-art networks on both the CIFAR-10 and CIFAR-100 datasets. In addition to the observations made in Section 4.6, we will continue improving the proposed method and extend it to other visual tasks such as semantic segmentation and object detection.
Unveiling the dynamics of gut microbial interactions: a review of dietary impact and precision nutrition in gastrointestinal health The human microbiome, a dynamic ecosystem within the gastrointestinal tract, plays a pivotal role in shaping overall health. This review delves into six interconnected sections, unraveling the intricate relationship between diet, gut microbiota, and their profound impact on human health. The dance of nutrients in the gut orchestrates a complex symphony, influencing digestive processes and susceptibility to gastrointestinal disorders. Emphasizing the bidirectional communication between the gut and the brain, the Brain-Gut Axis section highlights the crucial role of dietary choices in physical, mental, and emotional well-being. Autoimmune diseases, particularly those manifesting in the gastrointestinal tract, reveal the delicate balance disrupted by gut microbiome imbalances. Strategies for reconciling gut microbes through diets, precision nutrition, and clinical indications showcase promising avenues for managing gastrointestinal distress and revolutionizing healthcare. From the Low-FODMAP diet to neuro-gut interventions, these strategies provide a holistic understanding of the gut’s dynamic world. Precision nutrition, as a groundbreaking discipline, holds transformative potential by tailoring dietary recommendations to individual gut microbiota compositions, reshaping the landscape of gastrointestinal health. Recent advancements in clinical indications, including exact probiotics, fecal microbiota transplantation, and neuro-gut interventions, signify a new era where the gut microbiome actively participates in therapeutic strategies. As the microbiome takes center stage in healthcare, a paradigm shift toward personalized and effective treatments for gastrointestinal disorders emerges, reflecting the symbiotic relationship between the human body and its microbial companions. 
Introduction

The human body, often likened to a complex ecosystem, vividly exemplifies this analogy in the form of the gut microbiome (1). Within the intricate landscape of the human digestive system, a bustling community of microorganisms, encompassing bacteria, viruses, fungi, and more, collectively orchestrates a symphony that profoundly influences human health and well-being (2, 3). In the context of this article, "symphony" refers to the intricate and coordinated interactions within the gastrointestinal tract involving diet, gut microbiota, and their impact on health. The gut microbiome, a dynamic and diverse population, engages in a multifaceted relationship with its human host, contributing to various physiological processes and serving as a pivotal player in maintaining equilibrium (4). This microbial ecosystem within the digestive tract is not a static entity but rather a living, evolving ecology shaped by an interplay of factors such as genetics, diet, environment, lifestyle, and even the mode of delivery at birth (5, 6). This intricate microbiome is composed of many thousands of microbial species delicately coexisting in a precarious equilibrium. The gut, acting as the primary residence for this diverse microbial community, houses an array of microorganisms, predominantly bacteria, alongside viruses, archaea, and eukaryotic species (1). Far from a passive bystander, the gut microbiota actively participates in key physiological processes that impact human health. Its role extends to the breakdown of complex carbohydrates, proteins, and fats that might challenge the body's own enzymes (7). The trillions of bacteria populating the digestive tract play a vital role in breaking these molecules down into absorbable forms, facilitating nutrient absorption (8). Moreover, the gut microbiota exerts influence over metabolic processes, affecting energy storage, nutrient processing, and appetite regulation. A symbiotic relationship is evident in the microbiota's contribution to immune system modulation: by conditioning the immune system to respond effectively to harmful pathogens while curbing unnecessary inflammation, the gut microbiota acts as a crucial ally in maintaining immune balance (9). Additionally, certain microbial inhabitants are involved in the synthesis of essential B and K vitamins, as well as short-chain fatty acids (SCFAs) renowned for their anti-inflammatory properties (10, 11). Preserving the delicate equilibrium of the gut microbiome is paramount for overall health; when this balance is disrupted, the resulting imbalance in gut microbiota composition is termed dysbiosis. Dysbiosis has been linked to various diseases and conditions, including irritable bowel syndrome (IBS), inflammatory bowel disease (IBD), obesity, and certain neurological disorders (12). Therefore, maintaining a diverse and stable microbial community is integral to promoting a healthy gut microbiome. Dietary choices emerge as a powerful tool in shaping the gut microbiome. Consuming meals rich in dietary fiber fosters an environment conducive to the thriving of beneficial bacteria. Prebiotic foods, encompassing fibers that serve as sustenance for beneficial bacteria, contribute to microbial diversity and further support gut health. In essence, the gut microbiome stands as a complex ecosystem with far-reaching effects on human health (13). From its pivotal role in digestion to its contribution to immune function, understanding the profound symbiotic link between humans and their microbial inhabitants underscores the
significance of the gut microbiome. Elevating awareness of its importance and making informed dietary choices to promote diversity within this microbial community hold the potential to enhance health outcomes and deepen our comprehension of this intricate relationship. This burgeoning field of microbiome research is poised to transform our approach to human health, paving the way for innovative therapeutic interventions and personalized treatments targeting the gut microbiome. As we delve deeper into the intricacies of microbiome dynamics in human diseases, the potential for groundbreaking discoveries and therapeutic breakthroughs becomes increasingly apparent.

Navigating the nutrient landscape: impact on gut microbiota

The intricate relationship between diet and gut microbiota has emerged as a pivotal determinant in the multifaceted landscape of human health (Figure 1). Both our digestive processes and our susceptibility to gastrointestinal disorders are profoundly influenced by the dynamic interplay between dietary components and the microbial inhabitants of the gastrointestinal tract (14).

Food and bacteria as a complicated tango

The composition of our gut microbiota is directly influenced by the nutrients we consume. The gut microbiota can respond in various ways to different components of the diet, including carbohydrates, proteins, lipids, fibers, and specific bioactive chemicals. Complex carbohydrates, such as dietary fiber, serve as a crucial source of fuel for certain beneficial bacteria. The fermentation of fiber by these bacteria produces short-chain fatty acids (SCFAs), organic acids generated by gut bacteria during the fermentation of dietary fiber (Table 1), which possess anti-inflammatory properties and contribute to gut health (15, 16). Proteins from the diet can impact gut microbial diversity, with high-protein diets potentially encouraging the growth of bacteria that utilize amino acids, leading to the generation of harmful compounds. The types and quantities of fats in one's diet significantly affect the composition of the gut microbiome, potentially influencing bacterial imbalances linked to obesity and metabolic diseases (17). Foods rich in fiber and prebiotic ingredients sustain beneficial bacteria and foster a healthy microbiome, playing a crucial role in preventing and treating digestive disorders (18). Polyphenols and phytochemicals, plant-based molecules with antioxidant and anti-inflammatory characteristics, can positively influence the gut flora (19).

Consequences for digestive disorders

The intricate dance between nutrition and gut flora has far-reaching effects on gastrointestinal disorders. Changes in gut microbial composition have been associated with conditions such as irritable bowel syndrome (IBS), Crohn's disease, ulcerative colitis, and gastroesophageal reflux disease (GERD) (20, 21). Dietary habits can either exacerbate or alleviate symptoms, emphasizing the potential for controlling and preventing gastrointestinal diseases through personalized dietary approaches (22).
Exposing the role of dietary fiber in feeding good bacteria

The significance of dietary fiber in nurturing a robust gut flora and maintaining overall gut health is often underestimated, despite its widely recognized positive effects on digestion. This humble substance is more than roughage; it is a vital component of the digestive system's orchestra. Fiber serves as a powerful prebiotic, feeding beneficial bacteria in the digestive tract, which produce SCFAs that reduce inflammation, fortify the gut barrier, and enhance overall gut health (23). Eating a variety of fiber-rich foods promotes a healthy balance of microorganisms in the gut (13). Insoluble fiber from whole grains and vegetables aids in relieving constipation by increasing stool volume, while soluble fiber from oats and lentils helps maintain regular bowel movements.

The effect of fiber on digestive disorders

Increased consumption of soluble fiber has been reported to improve symptoms in some individuals with IBS (24). Dietary fiber may positively impact inflammatory bowel disease (IBD) by altering the gut microbiome, although the extent varies with the condition and circumstances. A high-fiber diet is associated with a lower risk of diverticular disease and related issues like diverticulitis. Diets rich in fiber are linked to a decreased risk of colorectal cancer, attributed to regular bowel movements and increased SCFA production (25). Dietary fiber plays a pivotal role in maintaining digestive tract health, significantly influencing the gut microbiome's composition and function. By incorporating a diverse range of high-fiber foods into our diets, including whole grains, fruits, vegetables, and legumes, we not only improve our health but also provide nourishment for the microbes within us (26). The intricate and mutually beneficial interaction between our food choices and the remarkable biosphere within us is best exemplified by the fiber-microbiome partnership (27).

Studying the role of probiotic-rich foods and prebiotic fibers

The nutritional conductors of gut health, probiotics and prebiotics, orchestrate a symphony of interactions among the microflora in the digestive tract. The gut microbiome, a dynamic ecosystem, is influenced by various dietary components performing unique yet interconnected roles.

[Table 1. Metabolites from food and their associations with gut microbial communities: additional review points. Columns: metabolite, food source, microbial association; the first row pairs short-chain fatty acids (SCFAs) with fiber-rich foods such as fruits and vegetables.]

Probiotics, live beneficial microorganisms, and prebiotics, indigestible dietary fibers, offer distinct benefits. Probiotics restore diversity and balance to the gut microbiota by introducing helpful bacterial strains. Probiotics outcompete pathogenic microbes, reducing the risk of infection and gastrointestinal distress. Some probiotics interact beneficially with the immune system, leading to more balanced and potentially less inflammatory immune responses. Probiotics may also modulate nutritional and energy metabolism (28, 29).
Feeding the philharmonic with prebiotics

Prebiotics, indigestible dietary fibers providing food for beneficial bacteria, contribute to the synergy with probiotics. Prebiotics selectively target and increase the population of specific beneficial bacteria already present in the gut (30). The fermentation of prebiotics produces SCFAs, benefiting gut health by reducing inflammation, fortifying the gut barrier, and supplying energy to colonic cells. Certain prebiotic fibers aid gut motility by promoting regular bowel movements and relieving constipation. Combining prebiotics and probiotics produces a synergistic impact, enhancing health advantages by simultaneously nourishing beneficial microorganisms (31). The gut microbiota is most balanced and resilient when probiotics and prebiotics work together: combining probiotic-rich meals with prebiotic fibers fosters a more diversified and stable microbiome. Improved intestinal barrier defenses are possible thanks to prebiotic fermentation contributing to SCFA production. Maintaining stability in the gut microbiota despite dietary or environmental changes is facilitated by the synergistic effects of probiotics and prebiotics, protecting against dysbiosis (32). The nutritional symphony that benefits the entire gut flora is produced when probiotics and prebiotics work in harmony. These factors, akin to conductors, lead the microbial orchestra to greater unity, variety, and resilience. Improving gut health by incorporating more probiotic-rich foods and prebiotic fibers highlights the interconnection between our food choices and the thriving world of microorganisms within us.

The unsung hero in gut health

Traditional diets have given way to Western diets, defined by the prevalence of processed and convenience foods. This dietary transformation has significantly impacted the food landscape, prompting increased interest in vegetarian and vegan diets with a focus on unprocessed, natural foods (33). Ongoing research delves into the consequences of these dietary choices for gastrointestinal health and function. The Western diet, characterized by its reliance on processed and sugary foods, initiates a cascade of consequences affecting gut health (34). Notably, a decrease in microbial diversity is observed, potentially contributing to abnormalities in the gut microbiota and an increased susceptibility to gastrointestinal diseases (35). Consuming processed foods high in sugar and unhealthy fats may induce a dysbiotic and inflammatory state in the body, with conditions like IBD and IBS linked to inflammation caused by dysbiosis (36). The gut barrier can be compromised by a diet rich in sugar and fat, potentially leading to toxin absorption and immune system activation. Conditions associated with altered gut microbiota composition, such as obesity and metabolic syndrome, are exacerbated by Western dietary patterns. In contrast, diets rich in plant-based whole foods, including fruits, vegetables, legumes, and grains, demonstrate several positive effects on gut health (37). Plant-based diets are high in fiber and support a diverse and beneficial microbiome by providing sustenance to various types of beneficial gut bacteria. Whole-plant diets exhibit anti-inflammatory effects, potentially alleviating the gastrointestinal inflammation associated with conditions like inflammatory bowel disease. The fiber in plant-based diets contributes to a stronger intestinal barrier, reducing the absorption of toxic chemicals into the bloodstream. Weight management and metabolic
health are positively influenced by plant-based diets, potentially reducing the risk of obesity-related gastrointestinal diseases. While the disparities between Western and plant-based diets in their impact on gut health are evident, moderation remains crucial (38). A nuanced, plant-based diet that incorporates minimally processed foods can be more effective than a strictly binary approach. Our digestive tract's state and overall health are intricately linked to the foods we consume. The contrast between Western diets, high in processed foods, and plant-based diets, rich in whole, natural foods, underscores the pivotal role of diet in shaping our gut microbiota and overall health. Plant-based diets, coupled with mindful consumption of processed foods, set the stage for a robust gut microbiota, a reinforced gut barrier, and a harmonious connection between our dietary choices and the complex ecosystem within us (39).

Gut microbiome's impact on gastrointestinal health

The intricate balance of our intestines is profoundly affected by the gut microbiome, the teeming ecology of microorganisms that inhabits our digestive tract. Microbial changes that disrupt the equilibrium of this microbial population have been linked to several gastrointestinal problems; IBS, IBD, and gastroesophageal reflux disease (GERD) are all illnesses that may be exacerbated by these changes. A recent study recruited 100 participants with diverse dietary habits and gut microbiome profiles. Participants were randomly assigned to either a personalized dietary intervention group or a control group following a standard dietary recommendation. The personalized intervention group received individualized dietary plans based on their gut microbiome composition, determined through comprehensive metagenomic analysis. The dietary plans were tailored to optimize the growth of beneficial microbial species and reduce the abundance of potentially harmful microbes. After a 12-week intervention period, fecal samples were collected for microbiome analysis, and participants underwent clinical assessments to evaluate changes in gut health markers. The results demonstrated significant improvements in gut microbiome diversity, composition, and metabolic function in the personalized nutrition group compared with the control group. Moreover, participants in the personalized intervention group reported reduced gastrointestinal symptoms and improved overall well-being. These findings underscore the potential of precision nutrition approaches in promoting gut microbiome health.

Microbial imbalances: catalysts for gastrointestinal disorders

Disturbances in the gut microbiota have been linked to IBS, a condition characterized by stomach pain, bloating, and altered bowel habits (41). IBD is characterized by persistent inflammation of the gastrointestinal system and includes Crohn's disease and ulcerative colitis (41). The microbial imbalance and decreased diversity known as dysbiosis are hallmarks of inflammatory bowel disease. The immune system's reaction to a dysbiotic pattern can worsen inflammation and hasten the development of disease. Acid reflux and heartburn are symptoms of GERD, which can be affected by changes in the microbiome. Bacterial imbalances in the gut may affect the synthesis of metabolites that influence oesophageal health (38). In addition, these changes can affect the function of the lower oesophageal sphincter, heightening reflux symptoms.
The link between microbial changes and gastrointestinal problems is mediated in several ways. The symptoms and progression of gastrointestinal diseases can be influenced by inflammation and immunological responses, both of which can be triggered by dysbiosis. Changes in gut microbiota composition can weaken the intestinal barrier, enabling potentially dangerous chemicals to enter the body and set off an immunological response. Microbial changes can affect the production of metabolites such as SCFAs, which influence inflammation and gastrointestinal health. Gut microorganisms also play a role in the production of neurotransmitters, which may have consequences for disorders like irritable bowel syndrome (42). In managing microbial changes for gastrointestinal well-being, probiotics and prebiotics are used to increase the growth of beneficial bacteria, restore microbial balance, and reduce symptoms. Low-FODMAP diets for IBS are one example of a dietary intervention that has shown potential for managing IBS symptoms by targeting certain microbial imbalances (43). Tailoring therapy to specific microbial imbalances may be possible with personalized interventions based on an individual's gut microbiome profile. The function of the gut microbiome in gastrointestinal illnesses is becoming clearer as our understanding of this complex ecosystem grows. Conditions like inflammatory bowel disease and gastroesophageal reflux disease have been linked to changes in the microbiome. The substantial connection between our gut microbiota and gastrointestinal well-being is being uncovered as academics and healthcare practitioners gain a better knowledge of these dynamics.

Decoding the brain-gut axis for holistic health

The brain-gut axis describes the intricate connection between the gut and the brain, with the gut often referred to as the "second brain" (44). This relationship unveils how our food choices impact both physical and mental health. Nutrition emerges as a pivotal player in shaping gut flora, influencing conditions like depression and anxiety. The microbiome in the intestines communicates with the central nervous system, producing metabolites, neurotransmitters, and immunological chemicals. Notably, certain gut bacteria generate vital neurotransmitters such as serotonin and dopamine (45). A healthy gut flora, nurtured by a diet rich in fiber, prebiotics, and probiotics, contributes to the production of mood-regulating neurotransmitters (Figure 2) (46). Conversely, diets high in processed foods and sugars can induce inflammation, affecting both the gut and the brain and potentially leading to mood disorders (38, 47). The intricate link between the gut and the brain underscores the profound impact of dietary choices on physical, mental, and emotional well-being, emphasizing the importance of embracing nutrient-dense diets and supporting a diverse gut flora for overall health (46).
Gut microbiome's role in autoimmune diseases

Autoimmune diseases often manifest in the gastrointestinal tract, with emerging evidence suggesting a crucial role for gut microbiome imbalances in their development (48). Celiac disease, characterized by gluten-triggered immune reactions, showcases the interplay between genetics, the gut microbiome, and disease onset (49). Dysbiosis, or imbalance in the gut microbiome, can trigger immune dysregulation, creating a pro-inflammatory environment that intensifies the immune system's response to gluten. The consequent changes in microbial metabolite production and damage to the intestinal barrier can lead to conditions like a "leaky gut," where microbial components pass through the barrier, triggering immune responses against the host's own tissues (50, 51). Altered microbial composition, reduced diversity, and abnormalities in bacterial groups are common in the gut microbiota of individuals with celiac disease (49). Recognizing the potential therapeutic role of gut microbiota modulation, interventions like probiotic supplements and dietary changes are being explored. Tailoring treatments to individual microbial imbalances offers a promising avenue for managing autoimmune diseases, paving the way for innovative research and potential breakthroughs in understanding and treating these conditions.

Strategies for reconciling gut microbes through diet

Strategies for reconciling gut microbes through diet play a pivotal role in maintaining digestive tract health. The Low-FODMAP diet, proven effective in managing the symptoms of conditions like IBS, targets fermentable carbohydrates to alleviate gastrointestinal issues. Introducing probiotics, beneficial microorganisms, into the diet modulates gut microbiota composition, enhances microbial diversity, and influences immune responses and gut barrier function (26, 40). Prebiotics, found in foods like garlic and bananas, provide indigestible fibers that stimulate the growth of beneficial microorganisms, producing anti-inflammatory short-chain fatty acids (52). The Mediterranean diet, rich in fiber and polyphenol-rich foods, shows potential benefits against gastrointestinal disorders by enhancing gut flora diversity and balance (53). These dietary interventions offer pathways to microbial reconciliation and improved gut health, underlining the significant role of food in shaping the complex ecosystem of the gut microbiota. As research progresses, these approaches may revolutionize the treatment of gastrointestinal distress, providing personalized and effective strategies for individuals based on their unique microbial composition (54).

Tailoring diets to microbial masterpieces

Precision nutrition is a groundbreaking scientific discipline reshaping our approach to health and wellness (Figure 3). By tailoring dietary recommendations to an individual's unique gut microbiota composition, this emerging field holds the potential to revolutionize the management of gut health and gastrointestinal diseases (55). The interplay of genetics, lifestyle, and gut flora significantly influences how a person responds to food, making personalized nutrition a comprehensive approach. The gut microbiota's impact extends beyond digestion, affecting various physiological and psychological functions. Profiling the microbiome through modern techniques like metagenomics allows researchers to understand its composition, abundance, and potential health implications. Armed with this microbiome information, healthcare providers can craft personalized nutritional advice, highlighting foods that support good bacteria or those that may disrupt balance (56). Precision nutrition emerges as a powerful tool for modifying the course of gastrointestinal disorders. For individuals with irritable bowel syndrome,
tailored dietary strategies based on their unique gut microbiota composition can enhance the effectiveness of interventions like the low-FODMAP diet (57). Similarly, personalized nutrition holds promise in managing IBD by addressing microbial imbalances and identifying specific dietary triggers, leading to reduced inflammation and improved symptom relief. Individuals with food sensitivities can also benefit from precision nutrition by identifying foods that negatively affect their gut microbiota (58). However, despite its potential, precision nutrition faces challenges such as the complexity of microbiome investigation and the need for extensive data interpretation. Direct correlations between the microbiome and health consequences are still under exploration, necessitating further research and development. The shift to precision nutrition for gastrointestinal health represents a departure from conventional approaches, allowing doctors to offer more tailored dietary advice based on an individual's unique microbiota. This approach exemplifies the synergy between advanced science and personalized care, holding the potential to revolutionize our understanding, treatment, and prevention of gastrointestinal disorders.

Clinical indications and therapeutic crescendos
Recent groundbreaking studies highlight the pivotal role of gut microbiota in shaping the future of gastrointestinal health. This expanding knowledge has paved the way for novel therapeutic approaches that leverage the ability to alter the gut microbiome, outlined below.

Exact probiotics
Moving away from blanket approaches, recent probiotic developments focus on tailoring formulations to address unique microbial imbalances associated with individual illnesses. Probiotics designed to produce specific metabolites or regulate immune responses show promise in treating such conditions (59).

Fecal microbiota transplantation
FMT involves transferring feces from a healthy donor to an individual with dysbiotic gut flora. Preliminary results suggest FMT could restore balance in patients with different clinical conditions, prompting further investigation into its viability as a treatment (60).

Microbial metabolites
The fermentation products of gut bacteria, known as microbial metabolites, have diverse physiological consequences. Short-chain fatty acids (SCFAs), such as butyrate, are being studied for their anti-inflammatory properties (7).

Nutritional software for individual needs
Technological progress is facilitating the implementation of individualized diet plans, contributing to the precision of dietary interventions (61).

Artificial microorganisms
Scientists are exploring the therapeutic engineering of microbial ecosystems, potentially leading to novel therapies using "designer microbiomes" engineered for specific tasks. Clostridium Cluster XIVa is under investigation for its potential to reduce inflammation and improve gastrointestinal health (62).
Neuro-gut interventions
Understanding the gut-brain axis opens avenues for new treatments for neurological disorders. Microbiome-targeted interventions, such as modulating neurotransmitters and stimulating the vagus nerve, show promise in addressing conditions like epilepsy and depression. The importance of the dynamic world of the gut microbiome to the future of gastrointestinal health is undeniable. Advances in scientific understanding empower us to modify, engineer, and harness its healing potential. The ongoing journey holds promise for groundbreaking therapies that not only treat symptoms but also address the microbial imbalances at the root of many gastrointestinal disorders (44,45). We stand on the brink of a new healthcare era where the microbial inhabitants of our gut become active partners in achieving optimal gut health and overall well-being. This microbial revolution, driven by precision, innovation, and interdisciplinary collaboration, signifies a transformative shift in healthcare toward personalized and effective treatments for gastrointestinal disorders.

Conclusion
The exploration of the intricate landscape of the human microbiome has revealed its profound impact on overall health. From the intricate interplay of nutrients shaping microbial diversity to the promising avenues of precision nutrition, this review underscores the evolving understanding of the gut's pivotal role in human well-being. As we witness the symphony of the gut microbiome, strategies for harmonizing microbial communities through dietary interventions offer tangible solutions for managing gastrointestinal health. Furthermore, the review highlights pioneering clinical approaches, including exact probiotics, fecal microbiota transplantation, and neuro-gut interventions, signaling a transformative shift in healthcare. These advancements not only represent scientific progress but also pave the way for personalized and effective treatments for gastrointestinal disorders. In this symbiotic dance between science and care, the microbial inhabitants of our gut emerge as active collaborators, guiding us toward optimal health. As we stand at the threshold of this new era, the review underscores the importance of precision, innovation, and interdisciplinary collaboration in shaping the future of gastrointestinal health and inspiring avenues for future research and clinical applications.

FIGURE 1 | As research progresses, these approaches may revolutionize the treatment of gastrointestinal distress, providing personalized and effective strategies for individuals based on their unique microbial composition (54).
FIGURE 2 | Brain-gut axis: impact of diet on mental health.
FIGURE 3 | Precision nutrition in gastrointestinal health.
2024-06-01T15:19:17.749Z
2024-05-30T00:00:00.000
{ "year": 2024, "sha1": "ad9a48f64ce4c0cb008731351cbf5b09a6435923", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fnut.2024.1395664/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "161174d05494fdf217cfb2ccfcf4e9bc6d95c857", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
254443841
pes2o/s2orc
v3-fos-license
Comparative Transcriptome Analysis Unravels the Response Mechanisms of Fusarium oxysporum f.sp. cubense to a Biocontrol Agent, Pseudomonas aeruginosa Gxun-2
Banana Fusarium wilt, which is caused by Fusarium oxysporum f.sp. cubense Tropical Race 4 (FOC TR4), is one of the most serious fungal diseases in the banana-producing regions of East Asia. Pseudomonas aeruginosa Gxun-2 could significantly inhibit the growth of FOC TR4. Strain Gxun-2 strongly inhibited the mycelial growth of FOC TR4 on dual-culture plates and caused hyphal wrinkles, ruptures, and deformities in in vitro cultures. In a greenhouse pot experiment, treatment of banana seedlings with Gxun-2 resulted in an 84.21% reduction in the disease. Comparative transcriptome analysis was applied to reveal the response and resistance of FOC TR4 to Gxun-2 stress. The RNA-seq analysis of FOC TR4 during dual culture with P. aeruginosa Gxun-2 revealed 3075 differentially expressed genes (DEGs) compared with the control. Among these, 1158 genes were up-regulated and 1917 genes were down-regulated. Further analysis of the gene functions and pathways of the DEGs revealed that genes related to the cell membrane, cell wall formation, peroxidase, ABC transporters, and autophagy were up-regulated, while down-regulated DEGs were enriched in sphingolipid metabolism and chitinase genes. These results indicate that FOC TR4 upregulates a large number of genes in order to maintain cell functions. The results of qRT-PCR conducted on a subset of 13 genes were consistent with the RNA-seq data. Thus, this study serves as a valuable resource regarding the mechanisms of fungal pathogen resistance to biocontrol agents.

Introduction
Banana Fusarium wilt is the most serious soilborne fungal disease; it is caused by Fusarium oxysporum f.sp. cubense (FOC TR4), has a wide range of occurrence, and is difficult to control [1]. It invades the xylem tissues of roots and spreads through the vascular system of pseudostems, causing plant death. In severe cases, there is significant banana yield reduction or even extinction [2]. In recent years, banana wilt disease has occurred on a large scale in banana planting areas, and it has dealt a devastating blow to the banana planting industry in China [3]. Despite the successful use of chemical fungicides to control banana wilt and increase yield, this method also causes environmental pollution and health problems. Biological control can effectively control banana wilt and inhibit pathogen growth through the use of microorganisms or their secondary metabolites. Many Bacillus and Pseudomonas strains have been widely investigated and used because of their direct or indirect beneficial effects of

Effects of P. aeruginosa Gxun-2 on the Growth and Morphology of FOC TR4
After incubation at 28 °C for 7 days, Gxun-2 significantly inhibited the growth of FOC TR4, with a significant inhibition zone (Figure 1A,C), and the inhibition rate reached up to 75.25%. In addition, observations by scanning electron microscopy (SEM) showed abnormal growth of the hyphae. The hyphae of FOC TR4 without inoculation of Gxun-2 (control group, CG) grew vigorously with a uniform thickness and smooth texture. However, the hyphae of FOC TR4 dual-cultured with Gxun-2 (treated group, TG) grew disorderedly, with ruptures, curves, knots, and swellings. These results show that strain Gxun-2 could inhibit the growth of FOC TR4 and alter its hyphal morphology.
The pot experiment showed that the incidence of banana Fusarium wilt when treated with Gxun-2 was significantly reduced, and the control effect reached 84.21% (Figure 1B,D). Furthermore, the longitudinal section of banana bulbs in the control group (CG) was browned, but little browning was observed in the bulbs of the group treated with Gxun-2 (TG), indicating that Gxun-2 has an inhibitory effect on FOC TR4.

The strain Gxun-2 showed an evident orange halo on the CAS test plate (Figure 2A) and produced phenolate-type siderophores as well as phosphorus-solubilizing activity (Figure 2B). The antifungal crude extract produced by Gxun-2 was collected after extraction and rotary evaporation. Thin-layer chromatography results showed four major spots under a 254 nm UV lamp, with one spot corresponding to the spot of the phenazine-1-carboxylic acid standard, with an Rf (retention factor) value of 0.83. Therefore, we deduced that the effective component of the substance with antagonistic activity produced by Gxun-2 was phenazine-1-carboxylic acid (Figure 2C).

Response Pattern of FOC TR4 to P. aeruginosa Gxun-2 Stress
In order to reveal the changes in the gene expression level of FOC TR4 after exposure to Gxun-2 stress, total RNA was extracted from FOC TR4 dual-cultured with Gxun-2 and from wild-type FOC TR4 mycelia, respectively. Three replicates of the total RNA for the control group (CG) and the group treated with Gxun-2 (TG) were collected for the reverse transcription of mRNA into cDNA. More than 50 million clean reads were obtained after filtration. The error rate of each sample was less than 0.0271%, and the GC content was 51.61-53.48%. The Q20 and Q30 were greater than 97.2 and 93.26, respectively (Supplementary Table S1).
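The Q20 and Q30 values just quoted are the percentages of sequenced bases with a Phred quality score at or above 20 and 30, respectively. As a minimal illustration (not the pipeline used in this study), they can be computed from per-base Phred scores like this:

def q20_q30(phred_scores):
    """Percentage of bases with Phred quality >= 20 and >= 30."""
    n = len(phred_scores)
    q20 = sum(q >= 20 for q in phred_scores) / n * 100
    q30 = sum(q >= 30 for q in phred_scores) / n * 100
    return q20, q30

# Illustrative scores for six bases of one read.
print(q20_q30([38, 36, 24, 18, 40, 31]))  # (83.33..., 66.66...)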
The total map rate was higher than 80.03% in each sample (Supplementary Table S2), and the correlation coefficients between TG and CG ranged from 0.6048 to 0.8387 (Supplementary Table S3). A total of 9901 expressed genes were detected in CG, while 9211 were detected in TG, with 8795 genes co-expressed between the two samples. Among them, 1106 and 416 genes were specifically expressed in the control and treated groups, respectively (Figure 3). DESeq2 was used to test for differentially expressed genes (DEGs) between the samples; compared with CG, 3075 genes in TG were significantly differentially expressed, including 1158 up-regulated and 1917 down-regulated genes (Figure 4).

Differential Expression Analysis and GO, KEGG Enrichment Analysis
Gene ontology (GO) was used to classify the functions of the DEGs into three basic categories, namely, biological process (BP), cellular component (CC), and molecular function (MF). On this basis, the DEGs can be divided into 30 subcategories, among which the three most populated terms in the BP category were biological process (1292 genes), metabolic process (927 genes), and oxidation-reduction process (272 genes). The integral component of membrane and the intrinsic component of membrane (790 genes) were the terms enriched in the CC category. The MF category mainly involved catalytic activity (1241 genes) and transferase activity (345 genes) (Figure 5). The 3075 DEGs were further analysed using the Kyoto Encyclopedia of Genes and Genomes (KEGG) database, and 116 pathways were enriched in five branches, including metabolism, genetic information processing, environmental information processing, organismal systems, and cellular processes. Among them, the DEGs enriched in metabolism were dominant (213 genes). The main enriched KEGG pathways were arginine and proline metabolism (map00330), tyrosine metabolism (map00350), amino sugar and nucleotide sugar metabolism (map00520), glycine, serine and threonine metabolism (map00260), tryptophan metabolism (map00380), glycerophospholipid metabolism (map00564), steroid biosynthesis (map00100), and peroxisome (map04146) (Figure 6).
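As a minimal sketch of the DESeq2-based DEG selection described above, assuming the results table was exported to a CSV with the standard DESeq2 columns (log2FoldChange, padj); the file name and cutoffs are assumptions, since the paper does not state them:

import pandas as pd

# Hypothetical export of the DESeq2 results table (TG vs. CG).
res = pd.read_csv("deseq2_TG_vs_CG.csv", index_col=0)

PADJ_CUTOFF = 0.05   # assumed adjusted-p threshold
LFC_CUTOFF = 1.0     # assumed |log2 fold change| threshold

sig = res[(res["padj"] < PADJ_CUTOFF) &
          (res["log2FoldChange"].abs() >= LFC_CUTOFF)]
up = (sig["log2FoldChange"] > 0).sum()    # up-regulated in TG
down = (sig["log2FoldChange"] < 0).sum()  # down-regulated in TG
print(f"DEGs: {len(sig)} (up: {up}, down: {down})")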
Figure 4. The y-axis corresponds to the mean expression value of log10 (p-value), and the x-axis displays the log2 fold change value. Grey dots: non-differentially expressed genes. Red dots: up-regulated genes. Green dots: down-regulated genes.

Antioxidant Activity-Related DEGs
Phenazine and its derivatives are the major determinants of biological control produced by P. aeruginosa (Figure 2B), and they can effectively inhibit fungal growth [12][13][14]. Phenazine can insert into a membrane and act as a reducing agent to transfer electrons to target cells, causing oxidative damage or even death by increasing superoxide free radicals in cells [15]. The antioxidant enzymes in FOC TR4 are the key enzymes that resist Gxun-2 stress. In the present study, two upregulated genes, namely, FOXG_15294 and FOXG_03076, were identified in the antioxidant system of the peroxisome pathway. These two upregulated genes may be related to the improved tolerance of FOC TR4 against oxidative damage and the repair of damage caused by oxidative free radicals [16,17]. ABC transporters are also a key detoxification factor, as they excrete toxic substances [18]. Five ABC transporter-related genes, namely FOXG_17197, FOXG_04837, FOXG_07972, FOXG_12952, and FOXG_15452, were significantly upregulated, which may help FOC TR4 to cope with the oxidative damage caused by phenazine.
Cell Wall Synthesis-Related DEGs
Chitin, glucans, mannans, and glycoproteins are important components of the fungal cell wall [19]. In TG, the genes FOXG_10061, FOXG_05078, and FOXG_05290, which encode chitin synthase, a class of glycosyltransferases that catalyse the synthesis of chitin, were downregulated in the amino sugar and nucleotide sugar metabolism pathways. Chitinase is mainly responsible for the catalytic decomposition of chitin, and five genes that encode chitinase (FOXG_09583, FOXG_12882, FOXG_00277, FOXG_11492, and FOXG_15329) were also downregulated in TG. The downregulated expression of these genes may allow for the fine-tuning of FOC TR4 under Gxun-2 stress. The glucan in the fungal cell wall is mostly (1,3)-beta-glucan, whose deficiency in the cell wall will affect the growth of cells and eventually lead to cell rupture [20]. The gene FOXG_03721, related to the synthesis of (1,3)-beta-glucan in the starch and sucrose metabolism pathway, was significantly upregulated, while FOXG_01250, which encodes (1,3)-beta-glucanase, was downregulated. The completely opposite expression of these two genes might be related to the reduction of (1,3)-beta-glucan consumption. Both fine-tuning processes are responses meant to mitigate cell wall damage.

Cell Membrane Synthesis-Related DEGs
Fatty acids play an important role in forming the fungal plasma membrane and maintaining cell mobility [21]. In the present study, the DEGs related to fatty acid synthesis (FOXG_16631, FOXG_05107, FOXG_05756, and FOXG_10933) were all downregulated in TG. In addition, the gene FOXG_01555, which encodes fatty acid elongase, was upregulated; this enzyme is mainly responsible for fatty acid chain extension. Sphingolipid is also an important component of fungal cell membranes. In TG, six downregulated genes, namely, FOXG_00989, FOXG_05578, FOXG_15265, FOXG_10269, FOXG_02604, and FOXG_09964, were enriched in the sphingolipid metabolism pathway. The down-regulation of these genes will affect functions of the cell membrane, such as plasma membrane transport [22]. The sterols enriched in the plasma membrane are a critical lipid and an indispensable component of all eukaryotic cells [23]. Therefore, sterols, as a membrane component, can affect various membrane-related functions, such as maintaining the permeability and fluidity of the cell membrane [24]. In TG, 17 DEGs were identified in the steroid biosynthesis pathway, of which 13 genes, including FOXG_11545, FOXG_02348, FOXG_08223, FOXG_03780, FOXG_06186, FOXG_15629, FOXG_01590, FOXG_10530, FOXG_09168, FOXG_04166, and FOXG_05355, were significantly upregulated in the ergosterol pathway. FOC TR4 increases the efficiency of ergosterol synthesis by up-regulating the genes mentioned above.

Autophagy-Related DEGs
The autophagy pathway in fungi is involved in nutrient recycling under stress [25]. In the present study, five upregulated genes were related to autophagy in FOC TR4, namely FOXG_00582, FOXG_08160, FOXG_15833, FOXG_01950, and FOXG_10507, which encode the autophagy-related proteins Atg9, Sch9, Vps33, Vps45, and Pep4, respectively.
These upregulated genes could improve the clearance of damaged cell structures and organelles in order to guarantee the normal growth of FOC TR4.

Validation of RNA-Seq Sequencing
Thirteen DEGs selected for qRT-PCR were involved in the cell wall and membrane structure, antioxidant activity, and autophagy of FOC TR4. The qRT-PCR results are in agreement with the RNA-Seq high-throughput sequencing data (Figure 7), indicating an expression pattern of up- and downregulated genes similar to that in the RNA-Seq sequencing, so the data could be used for further analysis.

Discussion
P. aeruginosa Gxun-2 could secrete various secondary metabolites, such as siderophores and PCA, which are essential to sustaining its ecological adaptability and survival. They could also inhibit the growth of a fungal pathogen through different antifungal mechanisms [26,27]. In addition, Gupta et al. demonstrated that P. aeruginosa could produce chitinase [28], while Peng and Sharon found that the metabolites of P. aeruginosa contained lipase and could inhibit the synthesis of the fungal cell membrane and cell wall [29,30]. Therefore, the DEG results mainly reflect significant changes in the synthesis of the cell wall and membrane, antioxidant damage, and autophagy in FOC TR4 cells (Figure 8). In the present study, the growth of FOC TR4 was significantly inhibited under Gxun-2 stress. At the subcellular level, the growth of FOC TR4 hyphae dual-cultured with Gxun-2 was abnormal. Distortion, bifurcation, and knotting were observed, possibly caused by antibacterial substances produced by Gxun-2. This result supports the hypothesis that hyphal growth is inhibited when fungi suffer from oxidative damage [31][32][33]. Peroxisomes are core organelles in eukaryotes where superoxide dismutase (SOD) and catalase (CAT) are both generated and detoxified [34]. Two genes of the antioxidant system in the peroxisome pathway, namely FOXG_15294 and FOXG_03076, which encode CAT and superoxide dismutase 1 (SOD1), respectively, were upregulated. These two enzymes are important antioxidant enzymes in organisms that can repair the damage caused by free radicals [16,17]. When FOC TR4 is induced to produce oxidative stress by phenazine substances, superoxide free radicals in the cells increase, and the up-regulation of these two genes may be one of the important response reactions of FOC TR4 to reduce the damage under oxidative stress.
Moreover, FOXG_04389, which encodes superoxide dismutase 2 (SOD2), was downregulated, possibly because SOD2 is a superoxide dismutase of the Fe-Mn family [35], which needs to bind iron when catalysing the reaction. If the iron concentration in the cells changes, then the activity of SOD2 will be affected. The lack of iron in FOC TR4 can be attributed to the siderophore released by Gxun-2 and causes the decrease in SOD2 activity. Therefore, in response to oxidative damage, FOC TR4 downregulated the expression of the SOD2 gene but upregulated the expression of the SOD1 gene of the Cu-Zn family. In addition, the ABC transporter is involved in the anti-oxidative stress response and has a detoxification function, expelling toxic substances [18]. The significant up-regulation of the five ABC transporter genes may coordinate the exclusion of oxidative radicals in response to the oxidative damage caused by phenazine substances.

The genes encoding chitin synthase in the amino sugar and nucleotide sugar metabolic pathways were downregulated; chitin synthase is a glycosyltransferase that catalyzes the synthesis of chitin [36]. The chitin-biosynthesis capability of FOC TR4 was decreased according to the downregulation of several genes. As a major component of the fungal cell wall, the insufficient synthesis of chitin may lead to changes in the structure and function of the cell wall. The genes FOXG_09583, FOXG_12882, FOXG_00277, FOXG_11492, and FOXG_15329, which encode chitinase, are mainly responsible for decomposing chitin. These genes were downregulated, which possibly reduced the damage to the cell wall. The cell inner wall skeleton is composed of beta-glucan and chitin, playing the role of a flexible viscoelastic framework and determining the shape and strength of the cell wall to a large extent. The most abundant beta-glucan in the fungal cell wall is (1,3)-beta-glucan, which makes up between 65% and 90% of the whole beta-glucan content [37]. The antifungal metabolite produced by Gxun-2 could inhibit the synthesis of the cell wall and thus cause the cell wall to rupture. In this study, hyphal curving and rupture were also observed.
Therefore, FOC TR4 significantly upregulated the gene FOXG_03721, encoding (1,3)-beta-glucan synthase, a key enzyme in (1,3)-beta-glucan synthesis; FOC TR4 reduced cell wall damage by up-regulating (1,3)-beta-glucan synthase under the inhibition of Gxun-2. At the same time, FOXG_01250, which encodes (1,3)-beta-glucanase, was downregulated. (1,3)-Beta-glucanase mainly releases glucose by hydrolysing the nonreducing end of (1,3)-beta-glucan [38]. The downregulation of FOXG_01250 in TG was possibly an attempt to alleviate the damage to the cell wall caused by the consumption of (1,3)-beta-glucan. Cell deformation and hyphal deformation occur when the chitin or glucan is inadequate to support the pressure of the contents [39]. The two processes above show completely opposite responses to Gxun-2 stress, suggesting that FOC TR4 alleviated the biological stressors by adjusting gene expression. However, the differential expression of these genes did not change the inhibited state of FOC TR4 caused by Gxun-2. We still observed the abnormal growth of hyphae, such as bending and knotting, which supports the finding that P. aeruginosa inhibits Aspergillus fumigatus growth by blocking (1,3)-beta-glucanase activity, thus altering the cell wall architecture [40]. Fatty acids are among the important fungal cell membrane structural components that maintain the morphological structure and biological function of the cell membrane. In the present study, one of the four DEGs related to fatty acids, which encodes ELO3, a fatty acid chain elongase mainly responsible for fatty acid chain elongation [41], was downregulated. The structure and function of the cell membrane of FOC TR4 may therefore also be inhibited to some extent. Ergosterol (ERG), as a crucial part of the fungal cell membrane, is a critical sterol in the cell membranes of fungi. It is mainly responsible for maintaining the stability and fluidity of the cell membrane [39,42]. ERG biosynthesis is tightly regulated by 25 known enzymes along the ERG production pathway. Among them, Erg11 and Erg5 are two crucial enzymes in ergosterol synthesis. The expression levels of two key genes (FOXG_13138 and FOXG_04166) that encode Erg11 and Erg5 were upregulated in the ergosterol synthesis pathway in FOC TR4. The activities of these two enzymes decreased in FOC TR4 under the stress of iron deficiency. Thus, FOC TR4 up-regulates these genes in order to mitigate the damage from iron deficiency. Two fungicides, namely terbinafine and naftifine, target ergosterol synthase and eventually lead to the cracking and death of fungal cells [43]. The ergosterol synthesis pathway depends on iron in four enzymatic steps, which include the two enzymes Erg5 and Erg11, encoded by FOXG_04166 and FOXG_11545. Erg5 catalyses the biosynthesis of ergosta 5, 7, 22, 24 (28)-trienol, while 4,4-dimethyl-ergosta 8, 14, 24 (28)-trienol is the direct product catalysed by Erg11. These two iron-dependent reactions are mainly involved in the process of ergosterol synthesis and are responsible for the second and penultimate reactions of the ergosterol biosynthesis pathway (Figure 9). Siderophores produced by Gxun-2 are natural iron chelators, low-molecular-weight secondary metabolites between 200-2000 Da with a high affinity for Fe3+ [44]. They are secreted in order to acquire iron as needed by P. aeruginosa, which results in reduced available iron in the environment. The activities of these two iron-dependent enzymes in FOC TR4 are thus also reduced, leading to the accumulation and depletion of the compounds of the intermediate steps, respectively.
Shakoury-Elizeh et al. found that, compared with iron-replete cells, iron-deficient cells exhibited a 24- and 3.3-fold accumulation of squalene and lanosterol, respectively. They also exhibited depletion of the major oxysterols, namely zymosterol (3-fold) and ergosterol (2-fold) [45]. Autophagy is a ubiquitous and non-selective degradation process in eukaryotic cells which is conserved from yeast to human. It is a physiological mechanism that promotes the turnover of cell macromolecules and organelles through a lysosomal degradative pathway in order to maintain cell homeostasis. Five upregulated genes are related to the autophagy pathway in TG. Among them, Atg9 is an interleaflet lipid transport protein that drives autophagy [46]. Sch9 signaling pathways regulate the induction of autophagy in yeast [47]. Vps33 is involved in multiple autophagy-related pathways of vesicular trafficking to the vacuole and in the maintenance of vacuolar integrity [48]. Vps45 is required for membrane fusion in autophagy [49]. Pep4 is a proteinase A that limits the proteolytic capacity of the vacuole in a substrate-dependent manner [50]. The upregulated genes above may increase the phagocytosis and degradation of oxidative material in order to reduce oxidative damage as a response of FOC TR4 when damaged by the antifungal substances secreted by Gxun-2.

According to the comparative transcriptome analysis, FOC TR4 regulated multiple genes in various pathways in order to reduce the damage to the fungal cell wall and cell membrane and from the oxidative compounds toxic to cells caused by antifungal substances, including siderophore, phenazine, and chitinase, produced by P. aeruginosa Gxun-2 (Figure 10).

Strains and Culture Conditions
P. aeruginosa Gxun-2 was isolated from the rhizosphere of healthy banana in Guangxi province, China. It is preserved in the Guangdong Microbial Culture Collection Center (GDMCC No. 61615).
Fusarium oxysporum f.sp. cubense Tropical Race 4 (FOC TR4) and other pathogenic fungi were obtained from the Guangxi Academy of Agricultural Sciences, Nanning, China. They were cultured on potato dextrose agar (PDA) medium at 30 °C for 7 d.

Inhibition of P. aeruginosa Gxun-2 to F. oxysporum and Its Mechanism
The inhibitory effect of Gxun-2 on the hyphal growth of FOC TR4 was tested using the co-culture method. First, FOC TR4 was inoculated into the center of a PDA plate, and four streaks of Gxun-2, 7.0 cm in length, were inoculated at a distance of 2.5 cm from the FOC TR4. A PDA plate inoculated with FOC TR4 alone was used as the control. Three replicates were set for each group, and the cells were incubated at a constant temperature of 28 °C for 7 days [10]. The inhibitory effect of the antagonistic strain on the pathogenic fungus was observed in order to calculate the inhibition rate, and the morphology of the hyphae at the colony edge of the pathogenic fungus was observed under a scanning electron microscope (SEM). Inhibition of mycelial radial rate = Σ (average diameter of target colony in control group − average diameter of target colony in experimental group)/average diameter of target colony in control group × 100% [51]. The pot experiment was conducted following the methods of Chen et al.; the disease grade was determined according to the browning degree of the longitudinal section of the banana seedling corm. Grade 0: no browning occurred in the longitudinal section of the corm; Grade 1: browning area in longitudinal section of corm ≤ 25%; Grade 3: 25% < browning area in longitudinal section of corm ≤ 50%; Grade 5: 50% < browning area in longitudinal section of corm ≤ 75%; Grade 7: browning area of corm longitudinal section > 75% [52]. Disease index = Σ (disease grade × number of diseased plants at that grade)/(the highest disease grade × total number of plants); Control effect (%) = (CG disease index − treatment disease index)/CG disease index × 100% [53].
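The formulas above are straightforward arithmetic over the replicate measurements; a minimal sketch, with function names and example values that are illustrative rather than taken from the study:

def inhibition_rate(control_diameters, treated_diameters):
    """Percent reduction of mean colony diameter relative to the control."""
    mean_c = sum(control_diameters) / len(control_diameters)
    mean_t = sum(treated_diameters) / len(treated_diameters)
    return (mean_c - mean_t) / mean_c * 100

def disease_index(grade_counts, highest_grade=7):
    """grade_counts maps disease grade (0, 1, 3, 5, 7) to number of plants."""
    total_plants = sum(grade_counts.values())
    weighted = sum(g * n for g, n in grade_counts.items())
    return weighted / (highest_grade * total_plants)

def control_effect(di_control, di_treated):
    """Percent reduction of the disease index relative to the control group."""
    return (di_control - di_treated) / di_control * 100

# Illustrative numbers only: three colony-diameter replicates per group (cm).
print(inhibition_rate([5.2, 5.0, 4.9], [1.3, 1.2, 1.2]))  # about 75%
print(control_effect(disease_index({0: 2, 1: 3, 3: 5, 5: 6, 7: 4}),
                     disease_index({0: 12, 1: 5, 3: 2, 5: 1, 7: 0})))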
The siderophores were detected using the chrome azurol S (CAS) solid medium [54], and Arnow and ferric perchlorate assays were used to detect catechol siderophores and hydroxamates [55,56]. Phenazine-1-carboxylic acid (PCA) was isolated and extracted as described by Palchevskaya et al. [57]. Analysis of components was conducted on the basis of Rf (retention factor) values [58].

Preparation of FOC TR4 Mycelium and RNA Extraction
The confrontation experiment of Gxun-2 and FOC TR4 was carried out as described above, and the medium was incubated at a constant temperature of 28 °C for 7 days. The mycelia from the Gxun-2-exposed boundary of the dual-culture PDA plate were collected and labelled as the treatment group (TG), while the mycelia collected from wild-type FOC TR4 were labelled as the control group (CG). Both the TG and CG had three biological replicates. Collected samples were frozen in liquid nitrogen and stored at −80 °C for RNA extraction. Total RNA was extracted from the tissue using TRIzol Reagent according to the manufacturer's instructions (Invitrogen, Waltham, MA, USA), and genomic DNA was removed using DNase I (TaKaRa, Shanghai, China). Then, RNA quality was determined using a 2100 Bioanalyser (Agilent Technologies, Santa Clara, CA, USA) and quantified using an ND-2000 (NanoDrop Technologies, Waltham, MA, USA). An RNA-seq transcriptome library was prepared following the instructions of the TruSeq RNA sample preparation kit from Illumina (San Diego, CA, USA). The processing of original images to sequences, base-calling, and quality value calculations were performed using the Illumina GA Pipeline (version 1.6) by Shanghai Majorbio Bio-pharm Technology Co., Ltd. (Shanghai, China), in which 150 bp paired-end reads were obtained.

RT-qPCR Assay
In order to verify the reliability of the DEGs from the transcriptome sequencing of FOC TR4, we selected 13 genes for qRT-PCR validation, with the actin gene as the internal reference gene. Primer (Version 5.0, Davis, CA, USA) was used to design the primers of the 13 selected DEGs (Table 1). PCR amplification was performed using the BIO-RAD system (Hercules, CA, USA), and the expression analysis was carried out using the built-in software. The reaction system consisted of cDNA 1 µL, 10 µM forward PCR primer 0.5 µL, 10 µM reverse PCR primer 0.5 µL, BlasTaq 2X qPCR MasterMix 10 µL, and nuclease-free H2O 8 µL, totaling 20 µL. The PCR program was as follows: 95 °C for 180 s, then 40 cycles of 95 °C for 15 s and 60 °C for 1 min. A dissolution curve was then generated. The qPCR for each gene was repeated three times, and the average cycle threshold (Ct) was calculated. The relative expression level of each gene was calculated using the 2^(−ΔΔCt) method [59].
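A minimal sketch of the 2^(−ΔΔCt) calculation cited above, assuming mean Ct values for one target gene and the actin reference in both groups; the numbers are illustrative, not measurements from the study:

def relative_expression(ct_target_tg, ct_actin_tg, ct_target_cg, ct_actin_cg):
    """Livak 2^(-ddCt) method: fold change of a target gene in TG vs. CG."""
    d_ct_tg = ct_target_tg - ct_actin_tg   # normalize to the reference gene
    d_ct_cg = ct_target_cg - ct_actin_cg
    dd_ct = d_ct_tg - d_ct_cg              # treated relative to control
    return 2 ** (-dd_ct)

# Illustrative Ct means: a roughly 4-fold up-regulated gene in TG.
print(relative_expression(22.0, 18.0, 24.0, 18.0))  # 4.0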
Conclusions
In summary, the soil bacterium P. aeruginosa Gxun-2 was a strong inhibitor of FOC TR4 growth both in vitro and in vivo, resulting in significant control of banana Fusarium wilt. An examination of FOC TR4 exposed to Gxun-2 showed damage to cell walls and membranes. Comparative transcriptome analysis showed that FOC TR4 can respond to and attempt to alleviate the damage caused by P. aeruginosa. It attempted to increase cell wall and membrane synthesis, antioxidant responses, detoxification, and export of antifungal substances from its cells. However, these processes were unable to prevent cell damage, and the ability of FOC TR4 to cause Fusarium wilt was greatly compromised by Gxun-2. These results lay the foundation for comprehensively expounding the mechanism of interaction between FOC TR4 and P. aeruginosa. They also provide an effective strategy for the control of Fusarium wilt in banana.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ijms232315432/s1, Table S1: Sequencing data statistics of F. oxysporum with and without P. aeruginosa; Table S2: Spearman's correlation coefficients of F. oxysporum with and without P. aeruginosa Gxun-2 suppression; Table S3: Correlation coefficients of different samples.
2022-12-09T16:12:51.860Z
2022-12-01T00:00:00.000
{ "year": 2022, "sha1": "e0199c37e90f13033b4e65abc5343de4b0de7ef0", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/23/23/15432/pdf?version=1670328685", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fb4406a63574199f55c7e1fbed03d475659fb183", "s2fieldsofstudy": [ "Agricultural And Food Sciences", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
248666633
pes2o/s2orc
v3-fos-license
Cost-effectiveness of Same-day Discharge Surgery for Primary Total Hip Arthroplasty: A Pragmatic Randomized Controlled Study
Background Total hip arthroplasty (THA) imposes a great medical burden globally, and the same-day discharge (SDD) approach has previously been considered to be cost saving. However, a standard cost-effectiveness analysis (CEA) within a randomized controlled trial (RCT) is needed to evaluate the benefits of SDD for THA from the perspective of both economic and clinical outcomes. Methods Eighty-four participants undergoing primary THA were randomized to either the SDD group or the inpatient group. Outcomes were assessed by an independent orthopedist who was not on the surgical team, using the Oxford Hip Score (OHS), EuroQol 5D (EQ-5D), SF-36 scores, and quality-adjusted life years (QALYs). All cost information was also collected. Results The mean stay of patients in the SDD group was 21.70 ± 3.45 h, while that in the inpatient group was 78.15 ± 26.36 h. This trial did not detect any significant differences in OHS or QALYs. The total cost in the SDD group was significantly lower than that in the inpatient group (¥69,771.27 ± 6,608.00 vs. ¥80,666.17 ± 8,421.96, p < 0.001). From the perspective of total cost, when measuring the OHS, the incremental effect was −0.12 and the incremental cost was −¥10,894.90. The mean incremental cost-effectiveness ratio (ICER) was 90,790.83. When measuring QALYs, the incremental effect was 0.02, and the ICER was negative. Sensitivity analysis produced similar results. Conclusions SDD has an acceptable likelihood of being more cost-effective than the traditional inpatient option. In the cost-utility analysis, SDD resulted in better QALYs while significantly reducing the total cost.

INTRODUCTION
Total hip arthroplasty (THA) is the first-choice treatment for many hip joint diseases. There were 378,089 THAs in the US in 2015 (1), with a cost of over 22,000 US dollars per procedure (2). In China, there were around 900,000 THAs in 2019. Thus, THAs result in substantial medical costs and are placing pressure on national budgets in both developed and developing countries. To reduce the cost, US Medicare launched the mandatory Comprehensive Care for Joint Replacement bundled payment model, which resulted in substantial hospital savings and reduced Medicare payments (3,4). China adopted strategies such as the diagnosis-related group (DRG) payment system to reduce the cost (5). Moreover, the economic environment greatly influences the number of joint arthroplasty procedures (1), and extreme poverty rose for the first time since 1998 due to the spread of COVID-19. In short, it is vital to reduce medical costs, both for governments and for individuals, under these conditions. Length of stay (LOS) in hospital is a crucial determinant of medical cost, and minimizing LOS could result in significant cost savings for arthroplasty (6). In the past 10 years, joint replacement has been performed on strictly selected patients on a same-day discharge (SDD) basis in the US and Europe, and studies have shown similar or even better outcomes compared with inpatient operation, resulting in cost savings of up to 30% per case (7,8). Based on this, the US government removed arthroplasty from the inpatient operation list in Jan. 2018, in an attempt to move toward outpatient surgeries. However, data or policies from a high-income country cannot be extrapolated to the whole world, especially to low-income countries.
Moreover, studies have shown conflicting results regarding surgical effects, complications, and adverse events when comparing classical inpatient surgery and SDD for arthroplasty (9)(10)(11). Therefore, a standard and comprehensive cost-effectiveness study is needed to help surgeons decide between outpatient and inpatient THA, balancing outcomes and costs (12). Recently, a computer-based retrospective study revealed that outpatient THA was cheaper but less effective in terms of total utility, and more cost-effective than inpatient THA within a specific willingness-to-pay (WTP) threshold (13). However, to the best of our knowledge, no randomized controlled trial has been performed to analyze the cost-effectiveness of primary THA on an SDD basis. In addition, no systematic evaluation of SDD THA has ever been reported in China. In this study, we evaluated the cost-effectiveness of SDD compared with that of regular care for patients who needed primary THA by analyzing the effect using the Oxford Hip Score (OHS), medical costs (both out-of-pocket and reimbursed), the mean incremental cost-effectiveness ratio (ICER), and quality-adjusted life years (QALYs) at 6-month follow-up. Through a standard cost-effectiveness analysis (CEA), we hope to help physicians and governments to look at SDD in a more accurate way.

Study Design and Participants
A prospective RCT was conducted between Oct 2017 and Aug 2019. Prior to the start of this study, the study protocol was approved by the Wuhan Union Hospital Ethical Committee (0086-01). All participants provided written informed consent. Patients qualified for inclusion in the study if they met the following criteria: undergoing unilateral primary THA; having the ability to understand the relevant treatment process; aged between 18 and 75 years; a body mass index (BMI) ≤ 40 kg/m²; hemoglobin ≥ 12 g/dL; American Society of Anesthesiologists (ASA) physical status classification of I or II; and no ongoing infection or blood coagulation disorders. Those with a history of coronary artery disease, chronic obstructive pulmonary disease, arrhythmias, or untreated obstructive sleep apnea were excluded. Eligible individuals were randomly assigned (1:1) to an inpatient THA group or an SDD THA group. SDD-THA was defined as admission, surgery, and discharge within 24 h, whereas the inpatients stayed in hospital for more than 1 day.

Treatment
Preoperatively, patients undergoing SDD-THA received information in the form of a teaching class conducted by a bedside clinician, which included the protocol, matters needing attention, exercise training, and home-based rehabilitation. All operations were performed by the same surgical team through a posterolateral approach. Celebrex 400 mg orally was used as routine analgesia before surgery. Cefazolin (1.0 g) and tranexamic acid (0.4 g) were administered 30 min prior to skin incision. A uniform perioperative multimodal pain management protocol was established by cocktail periarticular injection before wound closure, which consisted of flurbiprofen axetil (50 mg) and ropivacaine (200 mg). To avoid venous thromboembolism (VTE), all participants were encouraged to perform ankle pumping and quadriceps-setting exercises immediately. To relieve pain, a multimodal postoperative pain management protocol was used, with all patients being given 200 mg Celebrex orally every 12 h and 5 mg/325 mg hydrocodone/acetaminophen orally every 6 h.
Patients undergoing SDD-THA received an additional IV dose of antibiotics before discharge and 4 doses of 750 mg cefaclor orally every 12 h after discharge. Regardless of group assignment, all participants met the same criteria before discharge: general wellbeing, a dry wound, capable of independent transfers, and able to walk one hundred feet. On the first day after discharge, a visiting nurse called each patient to confirm they were doing well. All patients were followed up for 6 months, whether by phone, WeChat, or other means.

Outcomes
The primary outcome was the OHS, with 12 questions reflecting different aspects of hip function. Each question was scored from 0 to 4, with 4 representing the best outcome or least symptoms (14). The secondary outcome was QALYs, which were calculated as the outcome in the cost-utility analysis, as proposed by the CHEERS and CEA guidelines. QALYs are a parameter used to evaluate quality of life (QOL) and life-year gain. The values of QOL were estimated by the EQ-5D (15). The score for the EQ-5D ranges from −0.59 to 1.00, with a higher score indicating a better quality of life. QALY health benefits were estimated by calculating the area under the curve (AUC) of the EQ-5D with linear interpolation over the whole 6-month period: QALYs = EQ-5D × 0.5.
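As a minimal sketch of the QALY calculation described above, the following computes the area under the EQ-5D utility curve by linear interpolation (the trapezoidal rule); the time points and utility values are illustrative assumptions:

def qalys(times_in_years, eq5d_utilities):
    """Area under the EQ-5D utility curve by linear interpolation."""
    area = 0.0
    for i in range(1, len(times_in_years)):
        dt = times_in_years[i] - times_in_years[i - 1]
        area += dt * (eq5d_utilities[i] + eq5d_utilities[i - 1]) / 2
    return area

# Illustrative utilities at baseline and at 6 months (0.5 years).
print(qalys([0.0, 0.5], [0.70, 0.85]))  # 0.3875 QALYs over 6 months

For a roughly constant utility u over the half-year window, this area reduces to u × 0.5, which is the shortcut stated above.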
Total Costs
In cost-effectiveness analysis, the total costs include direct costs and indirect costs. Direct costs are all costs attributable to the health care intervention, and they can be divided into direct medical costs and direct non-medical costs. Direct medical costs are all treatment costs incurred during hospitalization, while direct non-medical costs are payments other than those to the medical institute, e.g., transportation costs and nutritional costs. Indirect costs represent economic losses resulting from hospitalization due to disease, e.g., the loss of working opportunity. The subjects of this study were hospitalized patients, whose costs were mainly reflected in direct medical costs, while direct non-medical costs and indirect costs were negligible and had little impact on the research results. In addition, direct non-medical costs and indirect costs vary among individuals, and gathering this information by questionnaire is usually challenging in a real-world scenario. Therefore, the total costs in this study are the direct medical costs, including operating room (OR) supply costs, surgical facility costs, hospital room costs, examination costs, laboratory costs, medication costs, and therapy costs. The total costs could also be calculated by summing up reimbursed costs and out-of-pocket costs. Reimbursed costs refer to the costs reimbursed by national medical insurance and commercial insurance, and out-of-pocket costs refer to the costs that patients need to pay by themselves after reimbursement.

Out-of-Pocket Costs
The out-of-pocket and reimbursed charges added up to the total costs. From the patients' standpoint, the out-of-pocket costs were the final total costs, which were considered to be the patients' primary concern.

Statistical Analysis
The study did not have enough power to statistically test for differences in health-economy outcomes. As a result, we adopted a probabilistic approach to health-economic inference, with the aim of informing decision makers about probability rather than statistical significance. This was also an exploratory study; on this basis, we did not set a formal hypothesis, and the sample size was pre-determined by clinical experience. All analyses were conducted according to the intention-to-treat (ITT) principle. ITT consists of keeping all randomized patients in their initial groups for the final analysis of a study; in this trial, patients were analyzed as randomized, regardless of the treatment actually received. In the CEA, the resulting estimates were incremental cost-effectiveness or cost-utility ratios (ICER/ICUR). The incremental cost over 6 months was divided by the incremental effect (treatment response or QALYs, respectively): ICER/ICUR = (Cost_SDD − Cost_inpatient)/(Effect_SDD − Effect_inpatient). Effects were calculated separately using the OHS and QALYs, and the ICER/ICUR was based on the incremental cost per unit of effect (OHS or QALYs) gained. The sampling uncertainty of the ICER/ICUR estimate was assessed using a non-parametric bootstrapping method with 5,000 iterations. These estimates were visualized using the cost-effectiveness plane and the cost-effectiveness acceptability curve. The cost-effectiveness plane shows the uncertainty of cost and effect in four quadrants: southeast (intervention costs lower than the control group, and the effect is better), northeast (intervention costs higher than the control group, but the effect is better), southwest (intervention costs lower than the control group, but the effect is worse), and northwest (intervention costs higher than the control group, and the effect is worse). Cost-effectiveness acceptability curves were plotted to show the likelihood that interventions would be cost-effective according to different WTP thresholds. All statistical analyses were performed using R software (Version 3.6.1; 2019, The R Foundation for Statistical Computing).
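A minimal sketch of the non-parametric bootstrap described above, assuming per-patient cost and effect arrays for each arm; the variable names and structure are illustrative, not the trial's actual analysis code:

import numpy as np

rng = np.random.default_rng(seed=1)

def bootstrap_increments(cost_sdd, eff_sdd, cost_inp, eff_inp, n_boot=5000):
    """Return n_boot rows of (incremental cost, incremental effect), SDD vs. inpatient."""
    cost_sdd, eff_sdd = np.asarray(cost_sdd, float), np.asarray(eff_sdd, float)
    cost_inp, eff_inp = np.asarray(cost_inp, float), np.asarray(eff_inp, float)
    out = np.empty((n_boot, 2))
    for i in range(n_boot):
        s = rng.integers(0, len(cost_sdd), len(cost_sdd))  # resample SDD patients
        p = rng.integers(0, len(cost_inp), len(cost_inp))  # resample inpatients
        out[i, 0] = cost_sdd[s].mean() - cost_inp[p].mean()
        out[i, 1] = eff_sdd[s].mean() - eff_inp[p].mean()
    return out

# Each row maps to a quadrant of the cost-effectiveness plane, e.g.,
# southeast = cheaper and more effective, southwest = cheaper but less effective.

Resampling whole patients within each arm preserves the correlation between an individual's cost and effect, which resampling the two outcomes separately would break.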
Costs The mean total cost was 69,771.27 ± 6,608.00 RMB in the SDD group and 80,666.17 ± 8,421.96 RMB in the inpatient group; the incremental cost was −10,894.90, indicating that the total cost for the SDD group was lower than for the inpatient group, and the result was statistically significant (p < 0.001). Table 2 lists all the cost details and compares them by category. Apart from OR supplies and the surgical facility fee, all other charge categories showed the same pattern as the total cost, which can probably be explained by the shorter hospital stay resulting in lower costs. Figure 2 demonstrates more intuitively the differences in total treatment costs between the two groups as well as the differences within each category. Table 3 shows the incremental cost, incremental effect, and mean ICER based on 5,000 bootstrap simulations for both the cost-utility analysis and the sensitivity analysis. According to the OHS, the SDD group achieved a lower mean total cost (−10,899.38) than the inpatient group, with a decline in mean effect (−0.14). In the related cost-effectiveness plane, after 5,000 bootstrap simulations (Figure 3A), almost half of the bootstrapped ICERs mapped to the southwest quadrant, indicating that SDD saved money but might reduce the treatment effect, while the remaining half fell into the southeast quadrant, indicating that SDD not only saved money but also obtained a better effect. Based on this calculation, an acceptability curve was generated (Figure 3B). At a WTP ceiling of 0, the probability of SDD being cost-effective was 100%, and this held until the WTP reached around 800,000. As the WTP increased further, the probability decreased accordingly; for example, when the WTP increased to 2,000,000, the probability that SDD could be regarded as more cost-effective than inpatient treatment fell to approximately 65%. When calculating the mean ICER of the cost-utility analysis using QALYs, we obtained a negative result. The cost-effectiveness plane revealed the ICER distribution based on QALYs (Figure 4A): most of the ICERs mapped to the southeast quadrant, indicating that SDD produced a better outcome at a lower cost. Figure 4B shows the corresponding acceptability curve based on QALYs. For WTP between 0 and 800,000, the probability of SDD being cost-effective remained extremely close to 100%; similar to Figure 3B, the probability decreased accordingly as the WTP increased further. FIGURE 3 | (A) Scatterplot of 5,000 replicates of the ICER (mean differences in total cost per unit of OHS) on the cost-effectiveness plane. Circles in the southwest quadrant represent trials in which SDD-THA cost less than inpatient THA but had a worse effect; circles in the southeast quadrant represent trials in which SDD-THA was less costly and more effective than inpatient THA. (B) Cost-effectiveness acceptability curve showing the probability of the SDD procedure being cost-effective at varying WTP ceilings (based on 5,000 replicates of the ICER using mean differences in total cost). FIGURE 4 | (A) Scatterplot of 5,000 replicates of the ICER (mean differences in total cost per QALY) on the cost-effectiveness plane. Circles in the southwest quadrant represent trials in which SDD-THA cost less than inpatient THA but had a worse effect;
circles in the southeast quadrant represent trials in which SDD-THA was less costly and more effective than inpatient THA. (B) Cost-effectiveness acceptability curve showing the probability of the SDD procedure being cost-effective at varying WTP ceilings (based on 5,000 replicates of the ICER using mean differences in total cost and QALYs). Sensitivity Analyses Using the SF-6D as an alternative measurement of QALYs produced a result similar to that of the EQ-5D. At the 6-month follow-up, the incremental effect was 0.03 (95% CI: 0.01-0.05). When calculating the mean ICER, we obtained a negative result, meaning that the SDD group dominated the inpatient group. Up to 90% of the ICERs mapped to the southeast quadrant. The acceptability curve based on QALYs calculated with the SF-6D was similar to Figure 4B: as the WTP increased, the probability of SDD being cost-effective remained extremely close to 100%, and the results showed no difference from our findings using the EQ-5D. DISCUSSION Our study was designed to estimate the cost-effectiveness and cost-utility of same-day THA compared with traditional inpatient THA surgery. The results showed that SDD exhibited similar effects at a lower cost than the traditional inpatient group. In addition, the cost-effectiveness analysis demonstrated that the probability of cost-effectiveness varied at different WTP thresholds. To the best of our knowledge, this is the first CEA of same-day THA, and the first same-day THA study carried out in China. SDD has been performed for almost 20 years in the US and Europe, and a systematic review including 1,009 patients undergoing SDD found no significant difference between inpatient and SDD groups regarding readmissions and complications (18). However, age has some influence on the outcomes of the two approaches. Berger et al. included patients aged 50 to 80 years and found more complications in the SDD group than in the inpatient group (10), while another study that only included patients aged 42 to 64 years (mean age 55) found no significant difference in outcomes or complications (19), which is in line with our data (mean age 53 years). This is possibly because health-related quality of life measured by the EQ-5D correlates negatively with age (20). Therefore, SDD is probably more feasible for younger patients, who are almost 10 years younger than the typical THA patient (around 63 years); patients of varied ages need to be included in further studies to draw definite conclusions. SDD has been reported to reduce LOS and is a useful way to reduce cost. Molloy et al. found that shortening the LOS from 4.06 to 2.97 days could lead to a 17.6% decrease in the cost of THA (2). Bertin et al. found that charges in the SDD group were $2,465 less (p = 0.02) than those of inpatient THAs, a 10.68% drop (21), while other studies showed a 30-50% decrease, which may be due to a 71.27% reduction in surgical facility fees (7,22). In our case, we reduced the LOS from 78.15 h for inpatients to 21.70 h for SDD patients and found 13.51% lower expenditure (10,894.90 RMB) in the SDD group. Moreover, the most obvious reduction in our study was the medication fee (4.82 vs. 12.82%), while there were no differences in OR supplies, which is reasonable and consistent with previous studies. In our study, we also evaluated the out-of-pocket and reimbursement portions separately.
First, we showed that out-of-pocket expenses decreased by 4,439.00 RMB, equivalent to 7% of China's 2019 GDP per capita. Interestingly, we found that the reimbursement portion decreased more in the SDD group than the out-of-pocket portion (6,455.90 RMB vs. 4,439.00 RMB). This may be because the medication fee dropped most markedly, and medications are mostly on the reimbursement list. Considering that 900,000 hip arthroplasty procedures were performed in China in 2019, with a projection of 572,000 in the US in 2030 (23), day surgery could bring great savings for governments. Therefore, our study provides a rationale for performing THA on an SDD basis, especially under the current circumstances of the COVID-19 outbreak and the resulting economic strain. There is an ever-increasing impetus for every government to ease the burden on medical resources. In developing countries such as China, medical resources are especially limited: the number of hospital beds in China is 28.49 ± 17.10/100,000 people (24), far fewer than in developed countries. SDD surgery maximizes bed utilization, making fuller use of medical resources, which would specifically benefit developing countries. We also found that different WTP thresholds affected the interpretation of the ICER. Assuming a WTP of 2,000,000, the likelihood of SDD being cost-effective was approximately 65% for gaining one unit of OHS. For this likelihood to exceed 90%, the WTP would need to be approximately 1,200,000 or lower per unit of OHS gained; as the WTP increased, the probability of SDD being cost-effective decreased. Meanwhile, the OHS results were not mirrored by the EQ-5D QALY gains: we observed that the SDD group dominated the inpatient group in the cost-utility calculation, meaning that the SDD option was not only cost-saving but also produced a better result for patients' quality of life. The sensitivity analysis showed similar results. This study has several limitations. First, we mainly compared the direct medical costs of surgery; direct non-medical costs and indirect costs were considered negligible, with little impact on the results, but were not measured. Second, the follow-up time was only 6 months, which is relatively short, so we have no data on differences in long-term complications or revision rates between the two groups. Third, this is a single-center study in one country; as health policies differ greatly between regions and countries around the world, the data would probably differ elsewhere. Therefore, a multicenter long-term follow-up study is needed in the future. CONCLUSION We found reduced costs, similar surgical effects, and no complications when THA was performed as SDD surgery compared with regular inpatient procedures in this short-term-follow-up pragmatic RCT. Moreover, the probability of this option being cost-effective varied depending on the willingness-to-pay threshold. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s. ETHICS STATEMENT The studies involving human participants were reviewed and approved by Wuhan Union Hospital. The patients/participants provided their written informed consent to participate in this study. AUTHOR CONTRIBUTIONS HT and WT were responsible for conception and design of the study.
JJ contributed to data collection. YS and PZ drafted the manuscript. WC and KZ contributed to manuscript preparation and data analysis. ZS and SY contributed to revision of the manuscript. All authors contributed to the conception, design, data collection, or analysis, and read and approved the final manuscript. FUNDING This study was supported by the National Natural Science Foundation of China (Nos. 82072509 and 81702157).
2022-04-29T15:50:22.078Z
2022-04-25T00:00:00.000
{ "year": 2022, "sha1": "646ccac7599e1cf939fe1fc4e94026e2c08bbc64", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpubh.2022.825727/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7b6433fe31ab202727c3f7504277897deda0b9b3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248723961
pes2o/s2orc
v3-fos-license
The accuracy of machine learning approaches using non-image data for the prediction of COVID-19: A meta-analysis Objective COVID-19 is a novel, severely contagious disease with an enormous negative impact on humanity as well as the world economy. An expeditious, feasible tool for detecting COVID-19 remains elusive. Recently, there has been a surge of interest in applying machine learning techniques to predict COVID-19 using non-image data. We therefore undertook a meta-analysis to quantify the diagnostic performance of machine learning models for the prediction of COVID-19. Materials and methods A comprehensive electronic database search covering the period between January 1st, 2020 and December 3rd, 2021 was undertaken to identify studies eligible for this meta-analysis. Summary sensitivity, specificity, and the area under the receiver operating characteristic curve were used to assess potential diagnostic accuracy. Risk of bias was assessed by means of the revised Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2). Results A total of 30 studies, including 34 models, met all of the inclusion criteria. Summary sensitivity, specificity, and area under the receiver operating characteristic curve were 0.86, 0.86, and 0.91, respectively. The purpose of the machine learning models, class imbalance handling, and feature selection are significant covariates that help explain the between-study heterogeneity in both sensitivity and specificity. Conclusions Our findings show that non-image data can be used to predict COVID-19 with acceptable performance. Further, class imbalance handling and feature selection should be incorporated whenever building models for the prediction of COVID-19, to further improve diagnostic performance. Introduction Coronavirus Disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) [1], has posed a tremendous challenge since being declared a pandemic in March 2020 [2]. As of December 23, 2021, more than 270 million cases had been confirmed, including over 5 million deaths [3]. Currently, widely accepted management strategies for minimizing the spread of COVID-19 include forced lockdowns, travel restrictions, quarantines, social distancing, isolation, and infection-control measures [4]. For infected individuals, supportive care is the primary treatment available, since specific, effective, curative therapeutics remain elusive [4]. Common adverse outcomes of COVID-19 include hospitalization, transfer to an intensive care unit, or even death [5,6]. Further, advanced COVID-19 presents with heterogeneous clinical features [5]. A large number of those infected remain asymptomatic, given the nature of COVID-19 symptomatology [7]. Efficient diagnosis of COVID-19 is thus difficult to achieve. The lack of optimal sensitivity and specificity in clinical detection methods has been shown to be a significant reason behind the rapid spread of COVID-19 [8]. Real-time reverse transcription polymerase chain reaction (rRT-PCR) is presently the diagnostic gold standard used to confirm COVID-19 infection [1]. Materials required for this assay, however, are reportedly in short supply, leading to possible delays in diagnostic results throughout the pandemic [9]. In view of such complex circumstances, a rapid and early diagnostic tool, or a ready system able to identify infected individuals, plays a vital role in managing the spreading COVID-19 pandemic.
Machine learning techniques have been emerging as a potential tool for healthcare professionals to accelerate their decision-making and improve diagnostic accuracy. A meta-analysis investigating the potential of machine learning to identify COVID-19 is thus not only essential but also timely. Until recently, a systematic understanding of how machine learning techniques using non-image data can be used to predict COVID-19 has been lacking, let alone a thorough meta-analysis of their diagnostic accuracy. To fill this research gap, the objectives of this study are as follows: 1) to meta-analyze the accuracy of the diagnosis and prognosis of COVID-19 based on non-image data via machine learning techniques; and 2) to compare and contrast the diagnostic accuracy across plausible covariates that may account for the heterogeneity among the selected studies. Hence, two research questions (RQ) were proposed: 1) RQ1: What is the diagnostic accuracy of machine learning models, based on non-image data, for the diagnosis and prognosis of COVID-19 patients?; and 2) RQ2: What covariates may contribute to the heterogeneity between the selected studies? The remainder of the article is organized accordingly. The "Material and methods" section describes the research method used in this study. The "Results" section presents the analytical results, the "Discussion" section discusses the significance of the findings, and the "Conclusions" section summarizes the findings of the current study. Material and methods This section describes the search strategy and selection process for the literature making up this study, as well as the method for extracting the required information from that literature. We also describe the quality assessment tools and statistical techniques employed. Search strategy and selection process A comprehensive search of electronic databases, including PubMed, ScienceDirect, and SpringerLink, was carried out for the period between 1st January 2020 and 3rd December 2021 using keyword combinations of COVID-19, machine learning, deep learning, and artificial intelligence. We did not search other databases such as Scopus or Web of Science for two principal reasons: 1) PubMed primarily focuses on medicine and the biomedical sciences, which is more specific to this study, while Scopus and Web of Science cover multidisciplinary fields [43]; and 2) PubMed is free and easier to use than Scopus and Web of Science [43]. In addition, we used Google Scholar as a supplementary source to search for articles. Despite Google Scholar not being recommended as a stand-alone resource for a literature search [44], it possesses sufficient stability in terms of article coverage compared with either Scopus or Web of Science [45]. Detailed search queries for each database are shown in Table 1. Studies considered relevant were expected to meet the following criteria: 1) studies must have investigated the predictive accuracy of COVID-19; 2) studies should have leveraged structured data as features; 3) studies should have used artificial intelligence to predict COVID-19; 4) studies should have provided sufficient outcomes of the predictive models; and 5) studies must have been written in English and peer-reviewed.
Studies meeting the following criteria were excluded: 1) studies using images (CT or CXR), image-associated reports, or unstructured data as predictive features; 2) studies irrelevant to our research goal; and 3) studies whose full texts were unavailable for examination. Based on the stated inclusion and exclusion criteria, the data were first assessed by the first author (K.M.) and cross-checked by the third author (C.S.). Any discrepancies were resolved through consensus discussion to ensure database accuracy and consistency. Finally, we located 30 studies, including 34 models, that predicted COVID-19 (see Fig. 1). Among the 30 studies, 17 studies (20 models) were extracted from PubMed, ScienceDirect, and SpringerLink, and 13 studies [6,14,17,19,21,26-29,46-49] (14 models) were identified through Google Scholar. Furthermore, three studies [21,30,32] included more than one model. Data extraction From each of the included studies, the following information was extracted: the purpose of the predictive models, the type of prognosis predicted, the types of features used to establish the models, the geographic areas of the samples, the machine learning techniques adopted, whether class imbalance issues were substantively handled, and whether extra feature selection strategies were adopted. We extracted or calculated the original true/false positives and true/false negatives from each study to derive summary outcome measures. Methodological analysis Diagnostic accuracy studies are often at risk of bias arising from differences in methodology, sample recruitment, or data collection [50]. We therefore assessed the quality of the studies according to the revised Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) guidelines, covering four domains: patient selection, index test, reference standard, and flow and timing [51]. Statistical analysis We meta-analyzed the diagnostic accuracy using the lme4 [52], mada [53], and meta [54] packages for R. Sensitivity and specificity were pooled in accordance with a bivariate model [55]. The area under the receiver operating characteristic curve (AUROC), diagnostic odds ratio (DOR), positive likelihood ratio (LR+), and negative likelihood ratio (LR−) were also estimated. Forest plots were created to show heterogeneity among the models under consideration. Moreover, a summary receiver operating characteristic curve with 95% confidence interval (CI) and 95% prediction interval (PI) was employed to assess the existence of a threshold effect among the models. Table 1. Search queries by database. Title, abstract, or author-specified keywords: COVID-19 AND ((machine learning) OR (deep learning) OR (artificial intelligence)). SpringerLink: "COVID-19" AND ("machine learning" OR "deep learning" OR "artificial intelligence"). Google Scholar: COVID-19 machine learning deep learning artificial intelligence. According to prior suggestions about possible sources of heterogeneity between the selected studies [50], meta-regression was undertaken with three plausible types of covariates: 1) model-purpose-related covariates, including the purpose of the predictive models and the type of prognosis predicted; 2) sample-related covariates, including feature type
(e.g., demographic data, vital signs, laboratory data, medical history) and the geographic areas of the patients; and 3) machine-learning-related covariates, including the type of artificial intelligence adopted, strategies for class imbalance, and feature selection strategies. After due diligence, the Institutional Review Board of E-Da Hospital approved the study protocol (EMRP-109-158). Results In this section, we report the characteristics of the included studies as well as the results of the quality assessment. Subsequently, we report the summary diagnostic accuracy of the included studies and the potential covariates used to explain between-study heterogeneity. General study characteristics Among the 34 predictive models examined in this study, 14 models (41.18%) aim to diagnose COVID-19 while 20 models (58.82%) aim to predict the prognosis of COVID-19 patients (see Table 2). Among the prognostic models, 7, 12, and 1 models predict whether a patient's status ends in critical care, mortality, or hospitalization, respectively. Twenty-four models used laboratory data alone or combined laboratory data with other clinical data (such as demographic information, symptoms, vital signs, or history) to predict COVID-19, while 10 models used only clinical data without any laboratory data. Most samples came from Western contexts (75.76%), such as America or Europe. Thirty-two models (94.12%) were based on conventional machine learning techniques, and the remaining two on deep learning. (Note to Table 2: one study may be designed to predict more than one COVID-19 outcome.) Quality assessment We assessed the quality of the selected studies based on QUADAS-2 [51] (see Fig. 2). Regarding risk of bias, 10 and 24 models were classified as raising some concerns (29.41%) and as low risk (70.59%) in the patient selection domain, respectively. All 34 models were considered low risk of bias in the index test and reference standard domains. Further, 13 and 21 models were regarded as raising some concerns (38.24%) and as low risk (61.76%) in the flow and timing domain, respectively. As for the applicability judgment, 11 and 23 models were considered of some concern (32.35%) and of low concern (67.65%), respectively. Finally, all 34 models were considered of low concern for applicability in the index test and reference standard domains. RQ1: Diagnostic accuracy of non-image predictive models based on machine learning The effect size pooled by traditional univariate meta-analysis can sometimes be misleading [59]; we therefore pooled the effect sizes based on the bivariate model [55]. As shown in Table 3, the overall pooled area under the receiver operating characteristic curve for machine learning to predict COVID-19 is about 0.91. Moreover, the pooled sensitivity, specificity, diagnostic odds ratio, positive likelihood ratio, and negative likelihood ratio were 0.86, 0.86, 37.93, 6.20, and 0.16, respectively (see Table 3). Fig. 3 and Fig. 4 show the forest plots of sensitivity/specificity and the summary receiver operating characteristic curve with 95% confidence interval and prediction interval for the 34 predictive models, respectively. Two χ² tests were conducted to test for equality of sensitivity and of specificity, and both showed significant results, χ²(33) = 1090.94, p < 0.001 and χ²(33) = 113615.20, p < 0.001, indicating that significant between-study heterogeneity existed in terms of both sensitivity and specificity.
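For readers who want to reproduce this kind of bivariate pooling, the sketch below shows its general shape in R. It assumes the mada package's madad()/reitsma() interface (one of the packages cited above) and uses invented 2×2 counts for three hypothetical studies; it is not the paper's analysis script.

# Bivariate (Reitsma) pooling of sensitivity/specificity with mada;
# the 2x2 counts below are invented for illustration only.
library(mada)

dat <- data.frame(
  TP = c(80, 120, 45),
  FN = c(15, 20, 10),
  FP = c(12, 30, 8),
  TN = c(90, 150, 60)
)

madad(dat)                 # per-study sens/spec, DOR, LR+ and LR-

fit <- reitsma(dat)        # bivariate random-effects model
summary(fit)               # pooled sensitivity and false positive rate

plot(fit, sroclwd = 2)     # summary ROC curve with confidence region
points(fpr(dat), sens(dat), pch = 2)  # overlay the individual studies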
RQ2: Plausible covariates explaining between-study heterogeneity Given the significant between-study heterogeneity in both sensitivity and specificity, we conducted sub-group analysis by means of meta-regression to identify potential covariates influencing the performance of the COVID-19 predictive models. As shown in Table 4 and Fig. 5(a), sensitivity was significantly (p = 0.002) higher for the 14 models designed to diagnose COVID-19 (0.92; 95% CI, 0.88-0.95) than for the other 20 models predicting the prognosis of COVID-19 (0.79; 95% CI, 0.71-0.86). The corresponding specificity of the 14 diagnostic models (0.80; 95% CI, 0.67-0.89) was lower than that of the 20 prognostic models (0.89; 95% CI, 0.82-0.94) but did not reach statistical significance (p = 0.144). Looking deeper into the prognostic models, the sensitivity of the 7 models predicting critical care (i.e., transfer to an intensive care unit or use of ventilation apparatus) due to COVID-19 (0.73; 95% CI, 0.49-0.88) was lower than that of the 12 models predicting mortality due to COVID-19 (0.81; 95% CI, 0.73-0.87), but did not reach statistical significance (p = 0.255), as shown in Table 4 and Fig. 5(b). The corresponding specificities were similar between the 7 critical care models and the 12 mortality models (0.88 vs. 0.90, p = 0.689). There was only one model predicting hospitalization due to COVID-19, so we did not include it in the sub-group analysis. The 12 models that adopted extra strategies to deal with class imbalance, as depicted in Table 4 and Fig. 5(f), had a lower sensitivity (p = 0.001) than models without such strategies (0.74; 95% CI, 0.60-0.84 vs. 0.90; 95% CI, 0.87-0.93). The specificity of the models with extra class-imbalance strategies was higher than that of the remaining models (0.92; 95% CI, 0.83-0.96 vs. 0.82; 95% CI, 0.72-0.89), but no statistical difference was established (p = 0.076). Discussion The global COVID-19 pandemic is a growing public health concern requiring unprecedented efforts in nearly every field of endeavor. Effective coping strategies for this disease, however, are still under development or of nascent consideration. Machine learning has the potential to play a key role in the fight against the COVID-19 pandemic, yet there has been a lack of meta-analyses focused on the diagnostic accuracy of COVID-19 prediction based on non-image data. On this basis, our study investigated the performance of machine learning approaches based on non-image data for predicting COVID-19 via a bivariate meta-analysis. The results demonstrate strong diagnostic performance, with a pooled sensitivity of 0.86, a pooled specificity of 0.86, and an AUC of 0.91. A prior meta-analysis [42] showed that the pooled sensitivity and specificity of artificial intelligence for CT scans were 0.90 and 0.91, respectively, which is higher than those of artificial intelligence based on non-image data. Nonetheless, non-image data are often far more obtainable than image data in hospitals with limited material resources.
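The subgroup comparisons reported above amount to bivariate meta-regression. A hedged sketch of the idea, again assuming mada's formula interface for reitsma() and using invented data with a made-up binary covariate, is:

# Bivariate meta-regression on a binary covariate (e.g., whether a
# model targets diagnosis or prognosis); all data here are invented.
library(mada)

dat <- data.frame(
  TP = c(80, 120, 45, 60, 95, 70),
  FN = c(15, 20, 10, 25, 30, 22),
  FP = c(12, 30, 8, 10, 18, 14),
  TN = c(90, 150, 60, 85, 110, 95),
  purpose = factor(rep(c("diagnosis", "prognosis"), each = 3))
)

# The covariate enters on the transformed sensitivity (tsens) and
# false positive rate (tfpr) of the bivariate model.
fit_mr <- reitsma(dat, formula = cbind(tsens, tfpr) ~ purpose)
summary(fit_mr)  # coefficient tests: does accuracy differ by purpose?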
The purpose of the predictive models, the type of prognosis, the feature type, the geographic area, the type of AI technique, whether class imbalance issues were dealt with, and whether extra feature selection strategies were implemented were further included in the bivariate meta-regression to account for potential heterogeneity among the primary studies. The findings demonstrated that sensitivity depended significantly on the purpose of the predictive models and on whether class imbalance issues were handled, while specificity depended significantly on whether extra strategies were used to select features before training the predictive models. In terms of model purpose, the sensitivity of diagnostic models is significantly higher than that of prognostic models (0.92 vs. 0.79), while the specificity of diagnostic models is lower than that of prognostic models (0.80 vs. 0.90), although this difference was not significant. Diagnostic models may have reached higher sensitivity because infection by COVID-19 can be confirmed against rRT-PCR testing results [60]. The prognosis of COVID-19, however, is more complicated, since it relates to various factors such as age, gender, obesity, comorbidities, and the timing of antiviral treatment [61,62]. For example, there is prior evidence [62] that COVID-19 patients with diabetes and hypertension, or other comorbidities such as cardiovascular disease, chronic obstructive pulmonary disease, and cancer, are more likely to have adverse outcomes. Further, the time span between the onset of severe outcomes and the application of antiviral treatment for COVID-19 is a major factor influencing prognosis [62]. We further analyzed the 19 models predicting two types of prognosis: critical care and mortality. The pooled sensitivity and specificity were 0.73 and 0.88 when the models were used to predict critical care for COVID-19 patients; these two figures are lower than those of the models used to predict the risk of mortality, but no statistically significant difference was confirmed. A plausible reason that the mortality models reached higher sensitivity and specificity than the critical care models may be that situations of patients close to death are simpler to define than those of critical care, which covers a variety of patient situations, such as transfer to an intensive care unit or use of ventilation apparatus. Especially at the apex of the epidemic, the number of patients exceeded the service capacity of most primary healthcare facilities; as such, the criteria for defining critically ill patients would differ from those used in more stable periods of the pandemic. Furthermore, SARS-CoV-2 continues to mutate [63], making it difficult to estimate its exact impact on current patient loads. Available evidence [64] suggests that the incidence of SARS-CoV-2 should be closely monitored, since patients from different locations have already shown different mutated COVID-19 sequences.
In addition to the rRT-PCR test, laboratory data (e.g., transaminases, lymphocytes, eosinophils, calcium, and aspartate aminotransferase) with or without other demographic and clinical data (e.g., symptoms, vital signs, or medical histories) were used to predict COVID-19 in 24 models, while the remaining models used only demographic/clinical data as features. Our meta-analysis showed that models including laboratory data performed better than models without laboratory data, with heightened sensitivity (0.88 vs. 0.80); models without laboratory data slightly outperformed models with laboratory data in terms of specificity (0.87 vs. 0.86). Neither difference, however, reached statistical significance. Previous studies [65][66][67] found that laboratory data can provide useful information for COVID-19 diagnostics. For example, prior evidence [67] found that the platelet count can dynamically reflect the pathophysiological changes prevalent in COVID-19 patients. Other evidence [68], however, found that some laboratory results in COVID-19 patients differ between pregnant women, children, and the general population. Including and testing a wider variety of laboratory data may be required to achieve a more stable predictive platform for COVID-19 as a whole. Further, beyond the widely acknowledged rRT-PCR test [69], other data, such as laboratory tests that have shown potential, should be identified in order to effectively widen the prediction of COVID-19 incidence. It would be even more helpful if machine learning models could incorporate routinely available laboratory tests to correctly predict COVID-19, which would streamline the diagnosis and treatment of COVID-19 patients and save considerable time and decision-making effort. Since the first case of COVID-19 was reported in Wuhan, China, and then in the rest of the world, knowledge about COVID-19 has altered and expanded to a certain extent. Hence, the question remains whether the performance of predictive models based on eastern countries may differ from western-based predictive models through a variety of circumstances (i.e., transparency of research procedures, availability of data, reliability of findings, geopolitical considerations). We therefore conducted a sub-group analysis based on samples from different geographic areas to make this determination. The sub-group analysis showed that models using samples from western contexts had lower sensitivity (0.86 vs. 0.88) and specificity (0.83 vs. 0.93) than models using samples from eastern contexts. The reason for this result is complex; we suspect it may lie in the algorithms adopted by these models. In the eastern group, eight models adopted only two major types of algorithms: ensemble learning and deep learning. With appropriate configurations, these two types of machine learning models are generally considered to have better predictive performance than other algorithms [70,71]. The western group, on the other hand, consisting of 25 models, applied seven different types of algorithms. Such a variety of algorithms may contribute to higher variation in the models' predictive performance, which may explain why the pooled sensitivity and specificity of the western group appeared lower than those of the eastern group.
The pooled sensitivity and specificity were (0.85, 0.86) and (0.99, 0.86) when machine learning and deep learning techniques were used, respectively. Deep learning outperformed conventional machine learning in terms of sensitivity but tied with it in specificity; however, no significant difference between the two techniques was shown. Despite the sensitivity of deep learning being quite high (0.99) in our study, its 95% confidence interval is also very wide (0.32-1), indicating that the sample size was too small, as is indeed the case here (n = 2 for deep learning). More deep learning studies are required to verify its true performance in predicting COVID-19 based on non-image data. In classification tasks, the receiver operating characteristic (ROC) plot and the AUROC delineate how an adjustable threshold trades off two types of errors: false positives and false negatives [72]. However, the ROC curve and AUROC are only partially informative when used with imbalanced data [72]. The explainability, traceability, and interpretability of performance measures will have greater importance in dealing with imbalanced data. Hence, problems of class imbalance are often dealt with using strategies such as the synthetic minority over-sampling technique [73]. Our study demonstrates that the pooled sensitivity for models without extra strategies for class imbalance is significantly higher than that of models with extra strategies (0.90 vs. 0.74), while the pooled specificity for models with extra strategies is higher than that of models without (0.92 vs. 0.82), although the latter difference was not statistically significant. Ramezankhani, Pournik, Shahrabi, Azizi, Hadaegh and Khalili [74] adopted an over-sampling strategy to deal with class imbalance when predicting type 2 diabetes and found that the original training dataset yielded higher sensitivity, and lower specificity, than a balanced training dataset, indicating that such a strategy does not guarantee better performance. Our study, by contrast, showed lower sensitivity and higher specificity, which may be because the resampling strategies of Ramezankhani et al. [74] were applied only to the training dataset, whereas the performance data our study collected were mainly taken from test datasets; this may indicate that class-imbalance handling strategies cannot guarantee overall performance on the test dataset. To enhance the performance of predictive models, preprocessing steps such as feature selection can be adopted before training machine learning models [10]. Our study showed that the 9 models with feature selection had higher sensitivity (0.88 vs. 0.85) and specificity (0.95 vs. 0.82) than the 25 models without feature selection, but only the specificity difference reached statistical significance. Based on the findings of our meta-analysis, the importance of feature selection should not be overlooked during preliminary model-building processes. Our findings also identify some current gaps in the state of the art and future research challenges. First, although more studies are now applying machine learning to non-image data for the prediction of COVID-19, their number remains smaller than that of studies utilizing image data.
More studies are thus required to better investigate the potential of non-image data for predicting COVID-19. Second, the paucity of studies using deep learning techniques for non-image data is another plausible issue, since deep learning is considered better suited to image data than other machine learning techniques; future studies could leverage various deep learning techniques for predicting COVID-19 based on non-image data. Third, the diagnostic accuracy achieved by machine learning based on non-image data is still lower than that based on image data; future studies may combine image and non-image data to establish a model that achieves better diagnostic performance. Our findings may also have important implications for medical practice. First, hospitals that are short of material and staff resources can adopt machine learning models based on routinely available non-image data to assist in identifying possible COVID-19 patients. By doing so, the contact risk of COVID-19 infection due to a lack of rRT-PCR or CT testing measures may be diminished. Second, developers of machine learning models can consider adopting feature-selection and class-imbalance strategies during model building and formulation. By doing so, a predictive model with better performance can be established to support informed decision-making by healthcare professionals. Further, models based on machine learning techniques may be applied to predict other epidemics and/or diseases in the future. To achieve this, specific features can be carefully selected for the specific pandemic or disease, and different types of machine learning algorithms can then be compared so that the best-performing algorithm can be determined from their demonstrated learning capabilities. Conclusions Our study meta-analyzed the diagnostic accuracy of artificial intelligence techniques for confronting the COVID-19 pandemic. By searching multiple electronic databases, 30 studies including 34 predictive models were included in this meta-analysis. A bivariate meta-analysis of diagnostic test accuracy was conducted to estimate sensitivity, specificity, and the summary receiver operating characteristic curve. Strong diagnostic performance was obtained with the included models. These findings indicate that machine learning models using non-image data can be implemented in hospital settings, especially in resource-limited locations, to effectively predict the incidence or prevalence of COVID-19. These models have the potential to become more accurate and more representative as datasets grow in size. Furthermore, covariates including the diagnostic purpose, whether class-imbalance issues were processed, and whether extra feature selection strategies were adopted were found to partially explain the heterogeneity among the primary studies evaluated. Summary points What was already known on the topic? • COVID-19 has had a serious impact on human lives and economic livelihoods; however, a quick and feasible tool for detecting COVID-19 remains elusive. • Real-time reverse transcription polymerase chain reaction is currently the "gold standard" for diagnosing COVID-19, but it requires a long turnaround time. • Pulmonary computed tomography scans and chest radiography can be used to complement the practical diagnosis of COVID-19.
What this study added to our knowledge? • Strong diagnostic test accuracy of COVID-19 can be achieved by using non-image data. • Non-image data, taken as predictive features, can assist hospitals with limited financial and human resources to identify cases of COVID-19. • Class-imbalance and feature-selection strategies may be considered before building predictive models useful for diagnosing COVID-19.
2022-05-13T13:11:53.291Z
2022-05-01T00:00:00.000
{ "year": 2022, "sha1": "ce5c0973f25230f3cd848b532e7a0a2bae984217", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.ijmedinf.2022.104791", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "20a4b30aa0d1ff437e24c3b16f0f734cf26db223", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
235443394
pes2o/s2orc
v3-fos-license
Comprehensive Analysis of Regulatory Network for LINC00472 in Clear Cell Renal Cell Carcinoma Renal cell carcinoma (RCC) accounts for about 2% to 3% of adult malignancies, and clear cell renal cell carcinoma (ccRCC) is the most common and aggressive type of kidney cancer, accounting for 75% of all kidney tumors. Although new targeted drugs continue to appear, they are still not suitable for all patients. Therefore, an in-depth study of the molecular mechanisms underlying the development of ccRCC and the exploration of new therapeutic targets will help to achieve precise treatment for ccRCC. With the development of molecular research, the study of long noncoding RNA (LncRNA) has given us a new understanding of tumors. Although LncRNAs do not encode proteins, they directly interact with proteins in various signaling pathways and affect cell functions. Therefore, it is of great significance to study the mechanism of LncRNAs in ccRCC. The expression level of Linc00472 in ccRCC tissues is significantly lower than in adjacent normal tissues, and its low expression is closely related to high Fuhrman grade. The low expression of Linc00472 is associated with poor prognosis in patients with ccRCC. The results of protein interaction and functional enrichment analysis indicate that genes upregulated in renal clear cell carcinoma may play a major role. Analysis of the target gene prediction results showed that Linc00472 may act as a ceRNA in the miR-24-3p-HLA-DPB1 pathway, the miR-24-3p-CXCL9 pathway, the miR-221-3p-C3aR1-VEGFR2 pathway, the miR-17-5p-HLA-DQA1/HLA-DQB1 pathway, and the miR-17-5p-C3aR1/C5aR1-VEGFR2 pathway, which play important functions. In addition, the regulatory relationships between miR-24-3p and TNFR2 (TNFRSF1B), CD36, and COL4A1 should also be noted. The value of Linc00472 in the diagnosis and treatment of ccRCC is worthy of further study. Introduction Renal cell carcinoma (RCC) accounts for about 2% to 3% of adult malignant tumors, and clear cell renal cell carcinoma (ccRCC) is the most common and aggressive type of renal carcinoma, accounting for 75% of all kidney tumors [1,2]. In recent years, the incidence of renal cancer has been on the rise in China, which places higher demands on its prevention and treatment. With the development of molecular research, modern oncology is being further improved, and these studies may have a profound impact on the prevention and treatment of tumors. In particular, research on long noncoding RNA (LncRNA) is improving our understanding of kidney cancer. However, many factors still hinder the realization of this goal: the regulatory mechanisms of gene expression remain largely unknown, constructing multi-molecule regulatory networks is a major challenge, and targeted therapy is not always effective for patients. As RNA transcripts that are not translated into protein, LncRNAs are more specific than messenger RNA (mRNA) in defining cell ontogeny and most protein-coding genes [3-5]. LncRNAs affect cell functions through genome-wide transcriptional regulation and direct interaction with proteins in a variety of signaling pathways. In addition, the dysregulation of LncRNA expression in kidney cancer often promotes a variety of carcinogenic mechanisms and the development of treatment resistance. Therefore, it is very important to understand the role of LncRNAs in kidney cancer, which will help strengthen its prevention and treatment.
Compared with the highly expressed LncRNAs in kidney cancer, research on LncRNAs with low expression is still scarce and not in-depth. Therefore, we focused on LncRNAs with low expression in renal clear cells and identified LncRNA Linc00472 as worthy of further study. Through the analysis of data from multiple public databases and the detection of the expression level of Linc00472 in collected ccRCC tissue specimens, we have initially constructed the regulatory network of Linc00472 in renal clear cell carcinoma, providing a bioinformatic and theoretical basis for further study of the mechanism of action of Linc00472 in ccRCC and its influence on the diagnosis, treatment, and prognosis of ccRCC [6]. Data Source and Screening of DEGs. The TCGA (The Cancer Genome Atlas) database is a joint project initiated by the National Cancer Institute and the National Human Genome Research Institute. TCGA sequenced the whole genomes of a variety of tumors and made the sequencing results public for research worldwide. The CRN (Cancer RNA-Seq Nexus) database is a comprehensive database jointly developed by the University of Southern California and National Chung Hsing University. CRN systematically collects genome sequencing results from the TCGA, SRA (Sequence Read Archive), and GEO (Gene Expression Omnibus) databases and can directly analyze the expression profiles of tumor transcriptomes (including LncRNAs) [7,8]. LncRNAs and protein-coding genes differentially expressed in ccRCC were downloaded through the CRN database. The data come from the RNA-seq expression profiles of renal clear cell carcinoma (KIRC) in the TCGA database, covering Fuhrman grades I to IV, with a total of 529 cancer tissues (265 grade I, 57 grade II, 126 grade III, and 81 grade IV) and 72 adjacent tissues. For the screening of LncRNAs, the criteria were |log2(fold change)| ≥ 1, FPKM > 0.1, and adjusted P < 0.01; for the screening of protein-coding genes, |log2(fold change)| ≥ 1, FPKM > 5, and adjusted P < 0.01. The LncRNAs and protein-coding genes finally obtained had to meet the screening conditions in all of grades I to IV. Expression and Correlation Analysis of Linc00472 in ccRCC. The GEPIA2 (Gene Expression Profiling Interactive Analysis 2) platform was developed by Peking University. It can directly analyze RNA sequencing expression data from 9,736 tumor samples and 8,587 normal samples from the TCGA and GTEx (Genotype-Tissue Expression) databases [9]. The expression level of Linc00472 in 31 tumor types was obtained through GEPIA2, with the data derived from the RNA-seq expression profiles of the TCGA database. The expression level of Linc00472 in tumor tissues and adjacent normal tissues was further obtained, and the expression differences of Linc00472 across grades I to IV were analyzed. Then, survival analysis was performed comparing the high-expression and low-expression groups of Linc00472. Collection of Tissue and Clinical Data. In this study, a total of 22 ccRCC tissues and paired adjacent tissues were collected from patients who had undergone surgery at the Second Hospital of Lanzhou University. The diagnosis and grading of all renal clear cell carcinoma tissues were confirmed by histopathology, with histopathological grading following the Fuhrman grading method. After the renal clear cell carcinoma specimens were removed by the surgeon, the tumor tissue was collected within 15 minutes.
One part was used for pathological examination, and the other part was placed in a cryotube and quickly transferred to a −80°C freezer for long-term storage. After the pathological results were obtained, the patient's basic information (including age, gender, tumor size, and pathological grade) was retrieved from the information management system of the Second Hospital of Lanzhou University to obtain complete clinical data. Quantitative Real-Time Polymerase Chain Reaction. Total RNA was extracted from clinical tissue samples and cell lines using TRIzol reagent (Takara, Dalian, China) and reverse-transcribed into cDNA with a random primer and reverse transcriptase kit (Accurate, Hunan, China) according to the manufacturer's instructions. Then, quantitative real-time PCR was performed using TB Green Premix Ex Taq II (Accurate) on an Applied Biosystems 7500 Real-Time PCR system following the manufacturer's protocols. The specific primers for LINC00472 were (forward) 5′-TTTTATCCTAGATTGCCACCAC-3′ and (reverse) 5′-TTAGCATCTAGGCCCAGGTT-3′. The specific primers for β-actin were (forward) 5′-CCCTGGACTTCGAGCAAGAGAT-3′ and (reverse) 5′-GTTTTCTGCGCAAGTTAGG-3′. Relative LINC00472 expression was normalized to β-actin. Gene Ontology and Pathway Enrichment Analysis. The Co-LncRNA database collects RNA-seq data from 28 human tissues, with a total of 29,012 samples, including 133 datasets from TCGA and 108 datasets from GEO. It predicts the target genes of LncRNAs by coexpression analysis and analyzes the biological functions of LncRNAs by enrichment of the target genes [10]. DAVID is an online gene annotation and functional enrichment website developed by the LHRI team of Leidos Biomedical Research in the United States [11,12]. Gene Ontology (GO) is a database established by the Gene Ontology Consortium that annotates genes by biological process (BP), molecular function (MF), and cellular component (CC) [13]. The Kyoto Encyclopedia of Genes and Genomes (KEGG) is a database established by the Bioinformatics Center of Kyoto University, Japan, which uses genetic information to infer higher-level, more complex cellular activities and biological behaviors; among its resources, the KEGG Pathway database stores information on gene pathways in various species [14]. Data on the coexpression relationships between LncRNAs and mRNAs, identified by Spearman correlation analysis and linear regression analysis, were downloaded through the Co-LncRNA database, and coexpressed mRNAs that were differentially expressed in grades I to IV (P < 0.01) were then screened out. The selected differential genes were subjected to GO BP and KEGG Pathway enrichment analysis with DAVID, and the functions and pathways with P < 0.01 after false discovery rate (FDR) correction were used to annotate the function of Linc00472 [15]. PPI Network Construction and Module Analysis. The STRING database analyzes protein-protein interactions (PPI). It collects, scores, and integrates all publicly available protein-protein interaction information and supplements this information through computational predictions. Currently, the STRING database covers 24,584,628 proteins from 5,090 organisms [16,17]. The coexpressed mRNAs that were differentially expressed in grades I to IV were associated through the STRING database, and a visual PPI network was constructed with Cytoscape 3.8 software.
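As an illustrative aside (not part of the original pipeline), the degree statistics reported for such a PPI network can be computed in R with the igraph package. The edge list below is a tiny invented subset of genes named in this study, and the community-detection call stands in, loosely, for the role MCODE plays in Cytoscape.

# Computing node degree in a small, invented PPI edge list with igraph
library(igraph)

edges <- data.frame(
  from = c("C3", "C3", "B2M", "HLA-A", "HLA-A", "C3AR1"),
  to   = c("C3AR1", "B2M", "HLA-A", "HLA-E", "HLA-C", "C5AR1")
)
g <- graph_from_data_frame(edges, directed = FALSE)

sort(degree(g), decreasing = TRUE)  # hub genes have the highest degree

# Dense-subgraph (module) detection, analogous in spirit to MCODE
cluster_fast_greedy(g)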
In addition, functional module analysis of the PPI network was performed with the MCODE application in Cytoscape 3.8. The MCODE parameters were set as follows: degree cutoff = 2, node score cutoff = 0.2, maximum depth = 100, and k-score = 2. Then, GO BP and KEGG Pathway enrichment analysis was performed on the differential proteins in each module. Target Gene Prediction of Differentially Expressed miRNAs. The AnnoLnc2 platform, developed by Peking University, can comprehensively annotate the sequence and structure, expression and regulation, function and interaction, and evolution and genetic association of human LncRNAs in real time; it is an upgraded version of the previous platform, AnnoLnc [18]. The miRCancer database is a miRNA-tumor association database constructed by literature text mining, with the miRNA-tumor association data regularly updated through PubMed text mining [19]. The miRWalk database is a cross-prediction database, developed at the Medical Faculty Mannheim of Heidelberg University, that can predict target genes from miRNAs and miRNAs from target genes [20]. The miRNAs that interact with Linc00472 were obtained through AnnoLnc2. Since the expression of Linc00472 is decreased in renal clear cell carcinoma, the expression of miRNAs interacting with Linc00472 as a ceRNA should be increased. Then, the miRNAs already studied in ccRCC were collected through the miRCancer database, and the miRNAs obtained from AnnoLnc2 were further filtered to obtain more reliable results. Target genes of the filtered miRNAs were predicted in the miRWalk database and correlated with the coexpressed differential genes in the PPI network; the miRNAs interacting with Linc00472 and their target genes were screened out to construct a regulatory network, which was visualized with Cytoscape 3.8 software. In addition, the coexpressed differential genes in each module and the selected miRNAs were analyzed separately, and a regulatory network was constructed and visualized with Cytoscape 3.8 software to predict key target genes. Statistical Analysis. A normality test was performed on the expression differences between the cancer tissues of the 22 ccRCC patients and the paired adjacent normal tissues. If the data followed a normal distribution, a paired t-test was used for analysis; if not, the paired t-test was applied after correction. According to the expression level of Linc00472 in cancer tissues, patients were divided into a high-expression group and a low-expression group with the median as the cutoff point. Fisher's exact test was used to analyze the correlation between the expression of Linc00472 and clinicopathological indicators, including gender, age, tumor size, and pathological grade. P < 0.05 was considered statistically significant. The statistical software used was GraphPad Prism 8.0 and IBM SPSS 25. Differentially Expressed LncRNAs and Protein-Coding Genes in ccRCC. Figure 1(a) is a volcano plot of the LncRNAs differentially expressed in KIRC retrieved from the Cancer RNA-Seq Nexus database, with screening criteria of |log2(fold change)| ≥ 1 and adjusted P < 0.01. Figure 1(b) is a heat map of the differentially expressed LncRNAs that met the screening criteria in grade I to IV cancer tissues and adjacent tissues. A total of 359 differentially expressed LncRNAs met the criteria, of which 243 were upregulated and 116 were downregulated.
A total of 1245 protein-coding genes met the criteria for differential expression, of which 679 were upregulated and 566 were downregulated.

The Significantly Lower Expression of LINC00472 Is Associated with High Grade and Prognosis. We excluded lncRNAs that were not differentially expressed in grade I to IV cancer tissues, since genes closely related to patient prognosis are more valuable for research. According to the research of Wang et al. [15], 11 lncRNAs, 3 mRNAs, and 3 miRNAs in ccRCC are related to overall survival, and 4 lncRNAs and 1 mRNA have been verified as independent prognostic factors; among these, LINC00472 has not been studied in depth in ccRCC. Combined with our screening results, we noticed that the expression of LINC00472 differed greatly across grade I to IV cancer tissues, with log2(fold change) values of −2.17, −2.25, −2.69, and −2.85, respectively. To observe the expression of LINC00472 more intuitively, its expression levels in 31 types of tumors and normal tissues were obtained through GEPIA2 analysis (Figure 2(a)), where it can be seen that LINC00472 expression in ccRCC is significantly reduced. As shown in Figure 2(b), the expression of LINC00472 in ccRCC tissues was significantly lower than in normal tissues (P < 0.05). Analysis of LINC00472 expression in cancer tissues of different pathological grades showed that its expression was higher in grades I and II than in grades III and IV (Figure 2(c)). We also analyzed the relationship between LINC00472 expression and prognosis in GEPIA2. As shown in Figures 2(d) and 2(e), patients in the LINC00472 high-expression group had significantly better overall survival (OS) and disease-free survival (DFS) than patients in the low-expression group. The expression level of LINC00472 is thus closely related to patient prognosis (P < 0.05), suggesting that LINC00472 may be an independent prognostic indicator of ccRCC.

LINC00472 Is Expressed at Low Levels in ccRCC Tissues and Is Associated with High Grade. To verify the analysis results from the TCGA database, we performed qRT-PCR on the cancerous tissues of 22 patients with ccRCC and their paired adjacent normal tissues (Figure 3). The pairing was performed after natural-log correction. Table 1 shows that the expression of LINC00472 was decreased in ccRCC tissues (P < 0.0001). We divided the 22 patients into a high-expression group and a low-expression group according to the median expression level in their cancer tissues. The statistical results show that the expression level of LINC00472 has no significant correlation with patient gender (P = 1.000), age (P = 0.387), or tumor size (P = 0.395), while its expression in grade I and II cancer tissues was significantly higher than in grade III and IV cancer tissues (P = 0.024). This indicates that the expression level of LINC00472 is closely related to the Fuhrman nuclear grade of renal clear cell carcinoma, suggesting that LINC00472 plays an important role in the progression of ccRCC.

Enrichment Analysis of LINC00472 Coexpressed Differential Genes. LncRNAs can regulate target genes in a variety of ways to exert their biological functions. To study the possible impact of LINC00472-coexpressed genes on the occurrence and development of ccRCC, we screened the protein-coding genes coexpressed with LINC00472 downloaded from the Co-LncRNA database and performed GO BP and KEGG Pathway enrichment analyses on the upregulated and downregulated differentially coexpressed genes, respectively.
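As an aside, the statistical comparisons reported above (normality check, paired t-test after natural-log correction, Fisher's exact test on the median split) can be reproduced with standard SciPy routines; all values below are hypothetical stand-ins for the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical relative LINC00472 expression in paired tissues (subset of n = 22)
tumor  = np.array([0.21, 0.35, 0.18, 0.42, 0.27, 0.15, 0.38, 0.24])
normal = np.array([1.02, 0.88, 1.15, 0.97, 1.21, 0.93, 1.08, 0.85])

# Natural-log correction before pairing, as described in the text
log_t, log_n = np.log(tumor), np.log(normal)

# Normality check on the paired differences, then the paired t-test
_, p_norm = stats.shapiro(log_t - log_n)
t_stat, p_pair = stats.ttest_rel(log_t, log_n)

# Fisher's exact test on a hypothetical 2x2 table:
# rows = high/low expression (median split), columns = grade I-II / III-IV
odds, p_fisher = stats.fisher_exact([[8, 3], [3, 8]])
print(p_norm, p_pair, p_fisher)
```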
There were 998 coexpressed genes that were differentially expressed in grade I to IV cancer tissues and adjacent tissues, including 519 upregulated genes and 479 downregulated genes. The results of the GO BP and KEGG Pathway enrichment analyses are shown in Figure 4. The GO BP analysis of the upregulated coexpressed differential genes showed that they are mainly involved in the immune response, the interferon-gamma-mediated signaling pathway, the inflammatory response, angiogenesis, the response to hypoxia, and other processes (Figure 4(a)). The KEGG Pathway analysis shows enrichment in a variety of diseases and pathways, such as Staphylococcus aureus infection, viral myocarditis, allograft rejection, graft-versus-host disease, and antigen processing and presentation (Figure 4(b)).

[Figure 5 caption: (a) PPI network of the coexpressed differential genes. The larger the node, the higher its degree of interaction; red nodes represent upregulated genes and blue nodes downregulated genes; the depth of the node color represents the gene's expression level, and the width of an edge represents the correlation score, i.e., the closeness of the correlation, between two genes. (b) The first two modules selected by MCODE, with the same color and edge conventions.]

The GO BP analysis of the downregulated coexpressed differential genes showed that they are mainly involved in metabolic processes, fatty acid beta-oxidation, the tricarboxylic acid cycle, oxidation-reduction processes, gluconeogenesis, and other processes (Figure 4(c)). The KEGG Pathway analysis shows that they are mainly enriched in metabolic pathways (Figure 4(d)). The GO BP and KEGG Pathway enrichment results suggest that the differential genes coexpressed with LINC00472 may affect key processes of tumorigenesis and development. An in-depth study of LINC00472 may provide favorable conditions for further revealing the molecular mechanism of renal clear cell carcinoma.

PPI Network Construction and Functional Module Analysis. To further understand the interactions among the differential genes coexpressed with LINC00472, and to find genes that may perform a major function in the LINC00472 regulatory network, we associated the 998 coexpressed differential genes through the STRING server and visualized the network with Cytoscape 3.8 (Figure 5(a)). A total of 578 nodes and 2225 edges were obtained. Some nodes have a high degree of association, such as APP (degree = 53) and GNAI1 (degree = 30) among the downregulated genes. However, most of the genes with a high degree of association are upregulated, such as C3 (degree = 44), C3aR1 (degree = 36), B2M (degree = 41), HLA-A (degree = 36), HLA-E (degree = 36), HLA-C (degree = 35), HLA-DRB1 (degree = 35), and HLA-DRA (degree = 35). In addition, after MCODE analysis, the first two modules selected from the PPI network are shown in Figure 5(b), and their enrichment results are summarized in Table 2. In Module 1, most genes cluster in the immune response and the interferon-gamma-mediated signaling pathway, accounting for more than 50%, while the KEGG results show no obvious specificity. In Module 2, the GO and KEGG results account for only a small proportion of the genes.
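Degree ranking of this kind is straightforward to reproduce; a minimal sketch with NetworkX, using a hypothetical edge list in place of the STRING export, is:

```python
import networkx as nx

# Hypothetical STRING-style edge list (gene, gene, combined score)
edges = [("C3", "C3AR1", 0.95), ("C3", "B2M", 0.80), ("B2M", "HLA-A", 0.99),
         ("HLA-A", "HLA-E", 0.90), ("APP", "GNAI1", 0.70), ("APP", "C3", 0.60)]

g = nx.Graph()
g.add_weighted_edges_from(edges)

# Rank nodes by degree, as done for the 578-node, 2225-edge network above
hubs = sorted(g.degree, key=lambda kv: kv[1], reverse=True)
print(hubs)
```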
Target Gene Prediction of miRNAs Interacting with LINC00472. To further trace the complete action path of LINC00472, it is also necessary to find the miRNAs with which it interacts as a ceRNA. By predicting the target genes of these miRNAs, the possible LINC00472-miRNA-mRNA pathways can be screened out of the differential genes coexpressed with LINC00472. The miRNAs interacting with LINC00472 obtained from AnnoLnc2 were filtered with the miRCancer database, their target genes were predicted by miRWalk, and the targets were correlated with the coexpressed differential genes in the PPI network. A total of 42 miRNAs that interact with LINC00472 were obtained, and a network of these miRNAs and their target genes was constructed (Figure 6(a)). To observe the key target genes more intuitively, we screened out the miRNAs and their target genes in the two modules (Figure 6(b)). Module 1 has 31 miRNAs interacting with LINC00472 and 26 target genes; Module 2 has 39 miRNAs interacting with LINC00472 and 53 target genes.
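The filtering-and-intersection pipeline used to build the Figure 6 networks reduces to set operations; a minimal sketch with entirely hypothetical inputs (the gene and miRNA names echo those discussed below, but the sets themselves are invented) is:

```python
# All identifiers below are hypothetical placeholders for the real exports.
annolnc2_mirnas = {"hsa-miR-24-3p", "hsa-miR-221-3p", "hsa-miR-17-5p", "hsa-miR-0000"}
mircancer_ccrcc = {"hsa-miR-24-3p", "hsa-miR-221-3p", "hsa-miR-17-5p"}
mirwalk_targets = {
    "hsa-miR-24-3p":  {"HLA-DPB1", "CXCL9", "PLOD3", "GENE_X"},
    "hsa-miR-221-3p": {"C3AR1", "VEGFA"},
    "hsa-miR-17-5p":  {"HLA-DQA1", "HLA-DQB1", "C3AR1", "C5AR1"},
}
ppi_genes = {"HLA-DPB1", "CXCL9", "PLOD3", "C3AR1", "VEGFA",
             "HLA-DQA1", "HLA-DQB1", "C5AR1"}

# Keep only miRNAs already reported in ccRCC, then intersect predicted
# targets with the coexpressed differential genes in the PPI network.
filtered = annolnc2_mirnas & mircancer_ccrcc
network = {m: sorted(mirwalk_targets[m] & ppi_genes) for m in filtered}
for mirna, targets in sorted(network.items()):
    print(mirna, targets)
```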
Discussion

There are few studies on LINC00472 in renal clear cell carcinoma, and much remains unknown about its function in ccRCC. Therefore, studying the mechanism of LINC00472 is of great significance for the diagnosis and treatment of ccRCC. Wang et al. [21] conducted a ceRNA network analysis in ccRCC and found that 11 lncRNAs, 3 mRNAs, and 3 miRNAs were related to overall survival, and that 4 lncRNAs and 1 mRNA were verified as independent prognostic factors. These included the LINC00472 studied here, and that result is consistent with our analysis. In this study, the expression level of LINC00472 was verified in clinical samples and found to be lower in higher-grade cancer tissues, consistent with the results of the TCGA data analysis. Establishing LINC00472 as an independent prognostic factor in ccRCC will require further observation and follow-up.

LINC00472 has also been studied in other tumors (lung cancer, colorectal cancer, liver cancer, breast cancer, etc.) [22]. Zou et al. [23] found that LINC00472 exerted a tumor suppressor effect in the KLLN-mediated p53 signaling pathway by downregulating the expression of miRNA-149-3p and miRNA-4270 in non-small cell lung cancer. Mao et al. [24] found that LINC00472 can inhibit the growth of lung cancer cells by downregulating the expression of miR-196b-5p. Su et al. [25] found that LINC00472 inhibited the proliferation of lung adenocarcinoma cells and promoted their apoptosis by downregulating the expression of miR-24-3p and thereby affecting DEDD (death effector domain-containing protein). In colorectal cancer, LINC00472 may be downregulated by DNA hypermethylation [26]; it downregulates the expression of miR-196a to upregulate the expression of PDCD4 (programmed cell death 4), exerting a tumor suppressor effect [27]. Likewise, in liver cancer, LINC00472 inhibits the proliferation, migration, and invasion of liver cancer cells through the miR-93-5p/PDCD4 pathway [28]. In breast cancer, the expression of LINC00472 is also regulated by promoter methylation [29], and ERα (estrogen receptor α) can inhibit the phosphorylation of NF-κB by upregulating LINC00472 [30]. In addition, Zhang et al. [31] found that downregulation of LINC00472 can reduce the expression of FOXO1 through miR-300 and promote the occurrence of osteosarcoma. LncRNAs are noncoding RNAs longer than 200 nucleotides. They have a wide range of biological functions and can affect a variety of signaling pathways, though not all of these are critical pathways. Therefore, we need to combine current research to find the most likely key pathways.

Consistent with the studies above, this study also found that LINC00472 interacts with miR-24-3p. According to previous reports, the expression level of miR-24-3p is elevated in a variety of malignant tumors, including lung cancer [32-34], liver cancer [35], breast cancer [36], bladder cancer [37], and nasopharyngeal cancer [38], and it is considered an oncogene. Therefore, it may also interact with LINC00472, acting as a ceRNA, to promote tumor cell proliferation, migration, and invasion in ccRCC. The main genes regulated by miR-24-3p in the two modules are HLA-DPB1, CXCL9, PLOD3, SLC2A5, STK10, TNFRSF1B, CD36, COL4A1, and SERPINA1. Among them, HLA-DPB1 and CXCL9 were screened in Module 1, and the rest in Module 2 [39].

HLA-DPB1 is a member of the HLA-II antigens; HLA (human leukocyte antigen) is the human MHC (major histocompatibility complex). HLA is divided into three subclasses: class I antigens, including the classical HLA-A, HLA-B, and HLA-C with high polymorphism and the nonclassical HLA-E, HLA-F, and HLA-G with limited polymorphism; class II antigens, including HLA-DPA1, HLA-DPB1, HLA-DQA1, HLA-DQA2, HLA-DQB1, HLA-DQB2, HLA-DRA, HLA-DRB1, HLA-DRB2, HLA-DRB3, HLA-DRB4, and HLA-DRB5, as well as genes of low variability involved in antigen processing and presentation; and class III antigens, including genes related to inflammation, leukocyte maturation, and the complement cascade [40]. HLA molecules present endogenous and exogenous antigens and are widely involved in the human immune response. During cancer development, the immune system processes tumor cells through the three stages of immunoediting (elimination, equilibrium, and escape); in the end, tumors escape the control of the immune system, leading to uncontrolled growth and widespread metastasis [41]. Tumor cells escape through a variety of mechanisms, including low expression of tumor surface antigens, which makes it difficult for the immune system to monitor them; the secretion of immunosuppressive factors (such as transforming growth factor-β and interleukin-10) and the induction of regulatory lymphocytes or myeloid cells (such as regulatory T cells and myeloid-derived suppressor cells); and the downregulation or complete loss of HLA-I antigen expression to avoid recognition and killing by cytotoxic T cells [42]. In this study, the HLA-I antigens screened from the PPI network were all upregulated, indicating that ccRCC cells may not fully trigger the immune escape mechanism through downregulation or loss of HLA-I antigen expression. Carcinogenesis is related not only to changes in HLA-I antigens but also to the expression of HLA-II antigens. According to reports, more than 80% of breast ductal carcinomas lack the expression of HLA-II antigens [43]. In contrast, approximately 50% of papillary thyroid cancers and 60% of primary melanomas express HLA-II antigens, indicating increased expression in these tumor types [44,45]. The prognosis of different tumors is also related to HLA-II antigen expression. In colorectal cancer [46-48], laryngeal cancer [49], and oropharyngeal cancer [50], HLA-II antigens are highly expressed and the prognosis is good. In melanoma and cervical cancer, HLA-II antigens are also highly expressed, but the prognosis is poor [51,52].
The heterogeneity of HLA-II antigen expression across tumors, together with the differing prognoses, indicates that these antigens can play different roles in different tumors. In this study, the expression of HLA-II antigens in ccRCC was upregulated, indicating that they may play an important role in the development of ccRCC and affect patient prognosis. From the enrichment results of Module 1, HLA-II antigens are involved in the significantly enriched processes of immune response, MHC class II antigen processing, peptide or polysaccharide antigen presentation, and antigen processing and presentation, and thus play an important role in the functional module. Therefore, miR-24-3p may promote the progression of ccRCC by upregulating HLA-DPB1, which needs further verification.

CXCL9, also known as monokine induced by interferon-γ (IFN-γ), is a selective ligand of CXCR3 (CXC-subfamily chemokine receptor 3); the CXC subfamily is one of the four subfamilies of chemokines (CC, CXC, CX3C, and XC). The CXCL9-CXCR3 pathway can exert antitumor effects. In melanoma, tumor areas with high CXCL9 expression show significant T-cell infiltration, which may be necessary to control tumor growth through IFN-γ-dependent pathways [53]. Another study found that the CCL5 required for T-cell infiltration is amplified by CXCL9 secreted by myeloid cells in an IFN-γ-dependent manner; tumors that co-express CCL5 and CXCL9 at high levels show higher immune reactivity and a higher likelihood of benefiting from immune checkpoint blockade [54]. In skin tumors, hosts lacking CXCL9 can still produce CXCL10 but cannot recruit cytotoxic CD8+ T cells, which permits tumor formation and promotes tumor growth [55]. On the other hand, CXC chemokines that do not contain the ELR motif (glutamate-leucine-arginine), such as CXCL9, can inhibit angiogenesis. In non-small cell lung cancer cells, overexpression of CXCL9 can inhibit tumor progression and metastasis by reducing tumor-derived blood vessel density [56]. It has also been confirmed in animal models that the combination of CXCL9 and low-dose cisplatin can inhibit angiogenesis and induce tumor cell apoptosis [57]. However, in humans there are at least three mRNA splice variants of CXCR3, namely CXCR3A, CXCR3B, and CXCR3-alt. Of these, CXCR3A and CXCR3B bind CXCL9 and participate in the regulation of angiogenesis: overexpression of CXCR3B promotes the apoptosis of microvascular endothelial cells and inhibits angiogenesis, whereas overexpression of CXCR3A enhances cell viability, promotes proliferation, and enhances the capacity for blood vessel formation [58]. This indicates that CXCL9-CXCR3 signaling has a bidirectional regulatory effect on tumors and can also promote tumor invasion and migration. Also in melanoma, CXCL9 can promote tumor migration through chemotaxis [59]. Adding exogenous CXCL9 to tongue squamous cell carcinoma cells expressing CXCR3 promotes cell invasion and migration as well as the EMT process [60]. In addition, it has been reported that prostate cancer cells recruit more CD4+ T cells by secreting more CXCL9, and the recruitment of CD4+ T cells into the tumor may increase the invasive ability of prostate cancer cells [61]. Therefore, CXCL9 can not only exert an antitumor effect but can also promote tumor growth and metastasis; both capacities can regulate tumor development. In this study, the expression of CXCL9 was upregulated, indicating that its tumor-promoting role in growth and metastasis may be relatively dominant in ccRCC.
From the enrichment results of Module 1, CXCL9 participates in the significantly enriched interferon-γ-mediated signaling pathway and inflammatory response, and it may also be an important factor affecting the function of Module 1. Therefore, miR-24-3p may promote the growth and metastasis of ccRCC cells by upregulating the expression of CXCL9 and thereby engaging its tumor-promoting role.

In Module 2, there are seven genes regulated by miR-24-3p: PLOD3, SLC2A5, STK10, TNFRSF1B, CD36, COL4A1, and SERPINA1. The expression of PLOD3 (lysyl hydroxylase 3) is increased in a variety of tumors, including lung cancer, liver cancer, and gastric cancer. In lung cancer, PLOD3 can promote metastasis by regulating STAT3, and inhibiting PLOD3 expression can have an antitumor effect through the PKC-δ signaling pathway [62,63]. In liver cancer, PLOD3, BANF1, and SF3B4 have jointly been proposed as molecular markers for early diagnosis and screening [64]. In gastric cancer, overexpression of PLOD3 is associated with poor prognosis [65]. The increased expression of PLOD3 in ccRCC suggests that it may play a role in the occurrence and development of ccRCC. SLC2A5 is the gene encoding the fructose transporter GLUT5. Studies have shown that high expression of GLUT5 in ccRCC aggravates tumor cell proliferation and colony formation [66]. TNFRSF1B is the gene encoding TNFR2 (tumor necrosis factor receptor II), which has been proposed as a new target for tumor immunotherapy [67]; immunotherapy targeting TNFR2 in ccRCC needs further research. CD36 is a fatty acid translocase that plays an important role in the transport of long-chain fatty acids. Recent research has found that CD36 is selectively upregulated in intratumoral regulatory T cells; knockout of CD36 reduced intratumoral regulatory T cells, enhanced the antitumor activity of tumor-infiltrating lymphocytes, and inhibited tumor growth without disrupting immune homeostasis [68]. In ccRCC, high expression of CD36 has been verified and is positively correlated with visceral fat content, indicating a poor prognosis [69]. CD36 may play an important role in the occurrence and development of ccRCC, and its influence on fat metabolism needs further study. COL4A1 (type IV collagen α1) can serve as a prognostic biomarker in urothelial carcinoma [70], and according to the ceRNA network analysis of ccRCC by Zheng et al. [15], COL4A1 is related to the overall survival of patients; whether COL4A1 can be used as a prognostic marker for ccRCC needs further verification. SERPINA1 encodes AAT (α1-antitrypsin), which is highly expressed in non-small cell lung cancer and plays an active role in its development [71]; whether the high expression of SERPINA1 in ccRCC plays a similar role needs further verification. In short, the genes regulated by miR-24-3p in Module 2 are all upregulated, and their functional enrichment scores are lower than those in Module 1. From the enrichment results, only COL4A1 is significantly enriched, in the catabolism of the extracellular matrix and collagen. Tumor development is a complex process; these genes may also act in ccRCC through regulation by miR-24-3p, with the main functional contributions likely coming from HLA-DPB1 and CXCL9. Nonetheless, recent research on TNFR2 and CD36 has made new progress, and COL4A1 may also become a new prognostic marker.
This has generated new ideas for studying the mechanism of ccRCC and is worthy of further study. It can be observed from the PPI network that, apart from the various HLA subtypes, the upregulated genes with the highest degree of association are C3, C3aR1, and B2M, all located in Module 1. According to the target gene prediction for Module 1, the only upregulated gene among the predicted targets is C3aR1. Recent studies have shown that in purified vascular endothelial cells, the function of VEGFR2 requires the simultaneous presence of C3aR1/C5aR1 and IL-6R-gp130 signal transduction, and in animal models, enhancing C3aR1/C5aR1 signal transduction accelerates angiogenesis [72]. VEGFR2 binds VEGFA to regulate angiogenesis and affects tumor growth and metastasis through the HIF pathway. Based on the target gene prediction results, C3aR1 and VEGFA are jointly regulated by miR-221-3p, and the enrichment results of Module 2 also show significant enrichment in angiogenesis and the response to hypoxic conditions. Studies have explored the effect of high miR-221-3p expression in ccRCC on the efficacy of TKIs: overexpression of miR-221-3p is associated with poor progression-free survival, while VEGFR2 is associated with longer survival [73,74]. Whether miR-221-3p interacts with LINC00472 as a ceRNA in ccRCC, changing the expression level of VEGFR2 by upregulating C3aR1 and thereby regulating the growth and metastasis of ccRCC through the HIF pathway, needs further verification.

APP and GNAI1 have the highest degrees of association among the downregulated genes and are also located in Module 1. However, the enrichment results show that the downregulated genes are significantly enriched in various metabolic processes in which APP and GNAI1 are not involved. Moreover, according to the analysis, the genes that may play an important role in Module 1 are all upregulated, indicating that although APP and GNAI1 are highly connected, they do not perform important functions there. Therefore, we focused on the upregulated genes in the PPI network. As mentioned above, HLA-II antigens may play an important role in the entire functional module, so we paid attention to miR-17-5p. The target gene prediction results show that miR-17-5p targets and regulates the expression of HLA-DQA1 and HLA-DQB1, as well as C3aR1 and C5aR1. Studies have found that miR-17-5p is upregulated in ccRCC, that its target is TRIM8, and that it can connect p53 to the N-MYC pathway: overexpression of miR-17-5p inhibits TRIM8, which on the one hand decreases the stability of the p53 tumor suppressor protein and on the other hand activates the oncogene N-MYC, promoting tumor cell proliferation [75]. In this study, miR-17-5p may affect the proliferation, migration, and invasion of ccRCC cells by upregulating the expression of HLA-DQA1 and HLA-DQB1; it may also increase the expression of C3aR1 and C5aR1 in the classical HIF pathway. LINC00472 may again play an important role as a ceRNA that interacts with it.

In conclusion, the expression level of LINC00472 in ccRCC tissues is significantly lower than that in adjacent normal tissues, and its low expression is related to high Fuhrman grade and poor prognosis. The results of the protein interaction and functional enrichment analyses indicate that the genes upregulated in ccRCC may play a major role.
Analysis of the target gene prediction results indicated that LINC00472 may act as a ceRNA in the miR-24-3p-HLA-DPB1, miR-24-3p-CXCL9, miR-221-3p-C3aR1-VEGFR2, miR-17-5p-HLA-DQA1/HLA-DQB1, and miR-17-5p-C3aR1/C5aR1-VEGFR2 pathways, where it may play an important role. In addition, the regulatory relationships between miR-24-3p and TNFR2, CD36, and COL4A1 should also be noted. In future work, we will expand the number of tissue samples for verification, regularly follow up matched patients to analyze prognosis, verify the potentially important pathways at the cellular level and in animal models, and integrate this work closely with the clinic to explore the role of LINC00472 in the diagnosis and treatment of ccRCC.

Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
String Theory in Electromagnetic Fields

A review of various aspects of superstrings in background electromagnetic fields is presented. Topics covered include the Born-Infeld action, the spectrum of open strings in background gauge fields, the Schwinger mechanism, the finite-temperature formalism and Hagedorn behaviour in external fields, Debye screening, D-brane scattering, the thermodynamics of D-branes, and noncommutative field and string theories on D-branes. The electric field instabilities are emphasized throughout and contrasted with the case of magnetic fields. A new derivation of the velocity-dependent potential between moving D-branes is presented, as is a new result for the velocity corrections to the one-loop thermal effective potential.

I. INTRODUCTION

Superstring theory is a fundamental candidate for a theory of quantum gravity because its elementary closed string spectrum naturally induces background fields of ten-dimensional supergravity. Among the bosonic fields one finds, in addition to the metric tensor of ten-dimensional spacetime, a torsion Neveu-Schwarz two-form field as well as higher and lower degree differential form Ramond-Ramond fields. The former field, when it is closed or equivalently on-shell, is formally equivalent to a background electromagnetic field strength tensor in spacetime, while the latter ones are the objects which couple to D-branes, the extended hyperplanes in spacetime onto which open strings attach (with Dirichlet boundary conditions). Dp-branes are p-dimensional soliton-like objects whose quantum dynamics are described by the quantum theory of the open strings whose ends are constrained to move on them. In the low-energy limit, a finite number of massless fields survive, whose dynamics are described by a (p+1)-dimensional effective quantum field theory. One of these fields is a U(1) gauge field. Therefore, understanding the behaviour of strings and D-branes in the presence of electromagnetic fields is important for the description of non-perturbative vacuum states in superstring theory. Furthermore, using duality, this problem is also important for understanding various aspects of D-brane dynamics.

In this review article we shall focus on a particularly tractable problem, that of open strings and D-branes in constant background electric and magnetic fields. These models have attracted renewed interest very recently because they give an explicit realization of some old conjectures about the nature of spacetime at very short distance scales. If one is to use string states as probes of short distance structure, then one cannot probe lengths smaller than the intrinsic length of the strings. Therefore, below the string scale the notion of geometry must drastically change, and an old proposal is that the spacetime coordinates become noncommuting operators. The deformation of D-brane worldvolumes to noncommutative manifolds by the external electromagnetic field has led to a revival of interest in these earlier suggestions. In addition, the effective low-energy dynamics can be described by new, noncommutative versions of ordinary quantum field and string theories, and hence a wealth of new problems arises for both field theorists and string theorists. Motivated by these issues, in the following we will present an overview of some of the fundamental aspects of string theory in electromagnetic fields. The qualitative effects can all be seen at the level of the simpler bosonic string theory, to which we will confine most of our attention in this paper.
As we indicate throughout, the results readily extend to the case of superstrings. Many of the novel effects exhibited by strings in background fields can be seen at the level of free open strings, or equivalently (in Type IIB superstring theory) for D9-branes which fill the spacetime. This is the topic of section II. We will derive the effective gauge field dynamics for the open strings up to one-loop order in string perturbation theory, and describe the spectrum of the string theory. We shall also start seeing here some important differences between electric and magnetic backgrounds in superstring theory. While strings in external magnetic fields possess no more instabilities than the quanta of Yang-Mills gauge theory, electric backgrounds play a much different role in string theory. In addition to the usual instability of the vacuum in an electric background that occurs in quantum electrodynamics, strong electric fields can tear apart a string and render both the classical and quantum theories physically meaningless.

As we have mentioned, string theory exhibits a variety of novel effects at very high energies. Non-trivial background fields may also have an effect on the properties of strings in this regime. In particular, one can examine how the external fields modify the behaviour of strings at high temperatures, where they are known to undergo a phase transition into a sort of deconfining phase in which the strings propagate as long string states in the spacetime. Free strings at finite temperature and in background electromagnetic fields will be analysed in section III.

One of the most important applications of the external field problem for free open strings is its interpretation in the T-dual picture, where it maps onto the problem of moving D-branes. This problem is dealt with at length in section IV. Here we present a new derivation of the well-known scattering amplitude between two D-branes travelling at constant velocity, which contains some novel technical details that may be of use for other calculations. The corresponding thermodynamic problem is particularly interesting in this case. A special class of black holes in string theory admits a dual description as a configuration of D-branes. By using the quantum string theory living on the D-brane, one can compute the Bekenstein-Hawking entropy and the rate of thermal radiation from the black hole. The corresponding Hawking temperature is conjectured to be the same in this case as the extrinsic temperature of a Boltzmann gas of D-branes. These features of the thermal ensemble of D-branes can be checked by computing the free energy using the effective, low-energy description of D-brane dynamics in terms of supersymmetric Yang-Mills theory with 16 supercharges. In section IV we shall also present new results for the leading velocity corrections to the one-loop thermal potential between D-branes.

The final instance of the constant external field problem, which we address in section V, is the study of the properties of D-branes themselves in the electromagnetic background. Here we shall focus on the geometric modifications that are caused by the external field. We shall see that, generically, the D-brane worldvolume is not a conventional manifold and is described by a noncommutative space. This is again a particular effect of the quantum open string theory that lives on the D-branes. Here we shall see a particularly drastic distinction between electric and magnetic fields.
In a particular low-energy limit, the effective dynamics of the noncommutative D-branes is described in the magnetic case by a deformation of the usual gauge field dynamics on the branes, while in the electric case there is no field theory limit and the effective theory is a deformation of the usual open string theory on the D-branes. In this latter case, the noncommutativity is given directly in terms of the string scale, and the most interesting aspect of this open string theory on the noncommutative manifold is that it does not contain closed strings. In particular, it is a novel example of a string theory which does not contain gravity. We can expect that these theories capture many of the important features of the standard string theories, but without being plagued by the conceptual problems that arise due to the presence of gravitation.

II. OPEN STRINGS IN BACKGROUND GAUGE FIELDS

In this section we will start describing some of the basic physical properties of strings in an external electromagnetic field. An external gauge field couples to an open string through Chan-Paton factors at the string endpoints. Therefore, because of the Green-Schwarz anomaly cancellation condition, all of our considerations in this section and the next strictly speaking only apply to Type I superstrings, since Type II superstring theory has no gauge group. The gauge field is then associated with a subgroup of the SO(N) gauge group of Type I string theory, where N = 2^{d/2} and d is the dimension of spacetime, which we will assume is even. By an electromagnetic field we will mean one that is associated with an abelian subgroup of this gauge group. However, we will only write down explicit formulas which also pertain in principle to Type II superstrings, as they will become relevant in sections IV and V when the open strings attach to D-branes, which can host electromagnetic fields in the guise of a Neveu-Schwarz two-form field.

A. The Born-Infeld action

In this subsection we will derive the low-energy effective action which governs the propagation of free open strings in a slowly-varying background electromagnetic field F_{μν} [25,1,16] (see [57] for a review). In string perturbation theory and in the RNS formulation, the vacuum energy may be computed in first quantization from the Polyakov path integral (2.1), where g_s is the string coupling constant whose powers weight the genus h of the open string worldsheet, which has Euclidean metric g_{ab} (with superpartner the two-dimensional gravitino field χ_{ab}); the sum over spin structures σ with the appropriate weights imposes the GSO projection that leads to modular invariance, a tachyon-free spectrum, and spacetime supersymmetry of the string theory. We will assume in this section that the target space has flat Euclidean metric δ_{μν}. The first contribution to the partition function (2.1) comes from the disc diagram Σ, which by conformal invariance of the classical theory can be parametrized by coordinates z = r e^{iϑ} with 0 ≤ r ≤ 1 and 0 ≤ ϑ ≤ 2π. The bosonic part of the tree-level string action in the conformal gauge is given in (2.2), where here and in the following a dot will denote differentiation with respect to the worldsheet boundary coordinate ϑ. The quantity T_s = 1/2πα′ is the string tension. The endpoints of the open strings carry charges e which couple to the electromagnetic vector potential A_μ(x).
When inserted into (2.1), the action (2.2) leads to the expectation value, with respect to the free worldsheet σ-model (the bulk term in (2.2)), of the Wilson loop operator for the gauge field A_μ over the boundary of Σ. To evaluate it, we use the background field approach and expand the string embedding fields as x^μ = x^μ_0 + ξ^μ, where x^μ_0 are their constant zero modes. The tree-level contribution to (2.1) then involves the propagator G^{μν}(z, z′) of (2.3), given in terms of the Neumann function for the disc, which satisfies the equation of motion ∇²N(z, z′) = δ(z − z′) and the Neumann boundary condition ∂_r N(z, z′)|_{r=1} = 0. On the boundary of the worldsheet, where z = e^{iϑ}, the Green's function (2.3) reduces to the boundary Neumann function (2.4), where ε → 0⁺ is an ultraviolet cutoff which regulates its logarithmic short-distance ϑ → ϑ′ singularity, N(ϑ, ϑ) = (1/π) ln ε. We will work in the radial gauge ξ^μ A_μ(x_0 + ξ) = 0, A_μ(x_0) = 0, and with slowly varying gauge fields. In the following we will evaluate the vacuum amplitude to leading orders in the expansion in derivatives of the field strength tensor F_{μν}.

After integrating out the bulk values of the string coordinates in the interior of the disc, the bosonic sector of the Polyakov path integral (2.1) at tree level and in the conformal gauge becomes (2.5), where the effective boundary action is given in (2.6). Here N^{−1} denotes the coordinate-space inverse of the boundary Neumann function (2.4), which is given explicitly in (2.7), where we have used the completeness relation (2.8). The non-constant string modes ξ^μ can be written on ∂Σ in terms of the Fourier series expansion for periodic functions on the circle, ξ^μ(ϑ) = Σ_{n≥1} (a^μ_n cos nϑ + b^μ_n sin nϑ) (2.9). The low-energy string effective action is then given by the renormalized value of (2.5). (It is a curious property of the Polyakov path integral that it computes the vacuum energy directly. The reason becomes clearer in the effective action approach [1], whereby conformal invariance is used to derive the variational equations of a spacetime effective action for the background fields. The string partition function is quite different from that of quantum field theory, in that it is more like an S-matrix.)

To evaluate the path integral (2.5), we use Lorentz invariance to rotate to a basis in which the antisymmetric d × d matrix F_{μν}(x_0) is skew-diagonal with skew-eigenvalues f_ℓ, ℓ = 1, ..., d/2. Then, on substituting (2.6), (2.7) and (2.9) into (2.5), the path integral factorizes into the product of Gaussian integrals (2.10)-(2.11). The divergent infinite product over the mode numbers n in (2.11) can be regulated using the ultraviolet cutoff ε and may be absorbed into a renormalization of the string coupling constant by using zeta-function regularization [25]. The other factor in (2.11) is also finite in zeta-function regularization, where ζ(s) = Σ_{n≥1} n^{−s} is the Riemann zeta-function with ζ(0) = −1/2. We thereby find the result (2.12). Rotating back to general form, the regularized partition function (2.10) can be written in a Lorentz-invariant way, and it leads to the effective string action (2.13), which describes a model of non-linear electrodynamics for the field strengths that is governed by the classic Born-Infeld Lagrangian [13]. The Euler-Lagrange equations for the action (2.13) can be written as the vanishing of the one-loop worldsheet β-function [1,17], with δA_μ(ε) the cutoff-dependent gauge field correction term that multiplies ẋ^μ in (2.2). The equations of motion for the gauge field are therefore equivalent to worldsheet conformal invariance of the quantum string theory.
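As a cross-check on the claim that the field theory limit of (2.13) is Maxwell electrodynamics, the weak-field expansion of the Born-Infeld determinant can be sketched in a few lines (schematic; the overall normalization is the one fixed in (2.13)):

```latex
\sqrt{\det\!\left(\mathbb{1} + 2\pi\alpha' e F\right)}
  \;=\; \exp\!\left[\tfrac{1}{2}\,\mathrm{tr}\,\ln\!\left(\mathbb{1} + 2\pi\alpha' e F\right)\right]
  \;=\; \exp\!\left[-\tfrac{1}{4}\,\mathrm{tr}\,(2\pi\alpha' e F)^{2} + O(F^{4})\right]
  \;=\; 1 + (\pi\alpha' e)^{2}\, F_{\mu\nu}F_{\mu\nu} + O(F^{4})\,,
```

using tr F = 0 and tr F² = −F_{μν}F_{μν} for an antisymmetric matrix with Euclidean indices. The constant piece renormalizes the vacuum energy, while the quadratic piece reproduces the Maxwell term quoted in the next paragraph.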
These equations are the stringy O(α′) corrections to the Maxwell field equations for A_μ. Notice that since det(1 + 2πα′eF) = det(1 + 2πα′eF)^⊤ = det(1 − 2πα′eF), the Born-Infeld action (2.13) contains only even powers of the field strength F in an α′ expansion. The leading-order term (the field theory limit) is given by the Maxwell Lagrangian −(e² T_s^{d/2−1}/4(2π)^{d/2} g_s) F_{μν}². For the uniform electromagnetic backgrounds that we shall deal with in most of this article, the calculations will thereby produce on-shell string amplitudes. The Born-Infeld action is an example whereby the contributions in the coupling constant α′, representing the string corrections to the field theory limit, can be summed to all orders of σ-model perturbation theory.

Born-Infeld theory has many novel characteristics which distinguish it from the classical Maxwell theory of electromagnetism. These novel features are predominant for a purely electric background field, which in Minkowski space would have only non-vanishing temporal components F_{0j} = iE_j. Then the electric field generated by a point-like charge is regular at the source and its total energy is finite [13]. The effective distribution of the field has a radius of the order of the string length scale √α′, and the delta-function singularity is smeared away. This is quite unlike the situation in Maxwell theory, whereby the field of a point source is singular at the origin and its energy is infinite. The analogy with open string theory has been used to suggest that the terms of higher order in α′ in the string effective action may eliminate Schwarzschild black hole singularities. Furthermore, at the origin of the source the electric field takes on its maximum value |E⃗| = E_c. The Born-Infeld Lagrangian in this case is proportional to √(1 − (2πα′e|E⃗|)²), which shows that there is a limiting value E_c = T_s/e such that for |E⃗| > E_c the action becomes complex-valued and ceases to make physical sense [25,14,45]. This instability reflects the fact that the electromagnetic coupling of strings is not minimal [5] and creates a divergence due to the fast-rising density of string states. For field strengths larger than the critical electric field value E_c, the string tension T_s can no longer hold the strings together. We shall encounter other novel aspects of strings in background electric fields throughout this paper. Notice, however, that such novel effects and instabilities do not arise in purely magnetic backgrounds.

Going back to the case of Euclidean signature, this calculation may be extended to the next order in string perturbation theory, whose contribution is the annulus diagram Σ, which by scale invariance can be taken to have outer radius 1 and inner radius a = e^{−πt} ∈ [0, 1]. The variable a is therefore the modulus of the annulus, and the path integration in (2.1) over metrics g_{ab} on Σ reduces, after gauge fixing, to an integral over Teichmüller space. We may now couple one endpoint of an open string to the boundary at r = a with a charge e₁, and the other end at r = 1 with charge e₂, so that the one-loop action in the conformal gauge reads as in (2.16). Here we shall consider only the case of neutral strings, e₁ = e₂ = e; charged strings will be dealt with later on. Again, by using the method of images, the Neumann function on the annulus is found to be given by the infinite series (2.17), which satisfies the usual equation of motion and the Neumann boundary conditions.
At the worldsheet boundaries, where z_k = e^{iϑ_k}, k = 1, 2, the annulus Green's function (2.17) can be written in the form (2.18), where k, l = 1, 2 and G_n is the 2 × 2 matrix given in (2.20). The function (2.18) is easy to invert, and proceeding as before the one-loop effective action may thereby be calculated to be (2.21) [25,1], where (2.22) is the usual zero-field vacuum energy for the annulus in the bosonic critical dimension d = 26, and η(τ) is the Dedekind function. The partition function (2.22) contains the contribution from the two conformal ghost fields, which do not couple to the external field F_{μν}. This result may be straightforwardly extended to fermionic strings by using the usual coupling of a spinor particle to an electromagnetic field and by using anti-periodic Fourier series expansions on ∂Σ for the fermion fields and the corresponding string propagator. At tree level, the only effect of supersymmetry is to cancel the tachyonic divergence that arises in (2.11) [57]. The final result is again the Born-Infeld action (2.13). The extension to non-abelian gauge fields is also straightforward [55] and yields the effective non-abelian Born-Infeld action for open strings whose endpoints transform in the fundamental representation of the gauge group. The leading term in the α′ expansion is the usual Yang-Mills Lagrangian for the non-abelian gauge field. Demanding spacetime supersymmetry then leads to the usual low-energy effective field theory description in terms of maximally supersymmetric Yang-Mills theory in d = 10 spacetime dimensions (the superstring critical dimension).

B. Open string spectrum

In this subsection we will describe the spectrum of open strings in a constant background electromagnetic field in second quantization, using the operator formalism [1]. We will concentrate again on bosonic strings, as we are merely interested here in some of the basic qualitative features of the spectrum. We will assume that the string worldsheet Σ is now an infinite strip with coordinates (τ, σ), where τ ∈ R and σ ∈ [0, 1] (this surface is conformally equivalent to the disc). The Euclidean action is given by (2.23), and in the worldsheet canonical formalism we regard τ as the time coordinate and σ as the space coordinate (so that Σ now has Minkowski signature). Varying (2.23) gives the usual wave equation □x^μ = 0 along with the mixed Neumann-Dirichlet boundary conditions (2.24). We will again use Lorentz invariance to skew-diagonalize the real-valued antisymmetric tensor F_{μν}. Since the skew blocks are independent, it suffices to concentrate on only one of them, and so we assume that the only non-vanishing component of the field strength tensor is F_{01} = −F_{10} = F. In this plane of the field, we introduce the complex target space coordinates x^± = (x⁰ ± ix¹)/√2, in terms of which the boundary conditions become (2.25), along with the standard free open string Neumann boundary conditions in all of the directions transverse to the 0-1 plane. We will now write down mode expansions which solve the equations of motion and satisfy the requisite boundary conditions (2.25). Since the only modification from the usual free string case occurs for the harmonic string coordinates in the 0-1 plane, we will focus our attention on their contributions. For this, it is necessary to treat neutral and charged open strings separately. As we will see in the following, there are drastic differences at both a qualitative and an analytic level between the two cases.
Let us note, however, that unitarity of the open string theory always requires the existence of both charged and neutral strings in the spectrum [3]. Consider a string scattering amplitude with the given charges e₁, e₂ at the endpoints. An amplitude with an even number of external legs can be sliced in many different ways into intermediate states. Some of these intermediate states will consist of open strings with either the charge e₁ or e₂ at both of their ends. An amplitude with an odd number of external legs necessarily involves at least one neutral string state in the scattering process. Therefore, any amplitude should be summed over all charges in the decomposition of the fundamental representation of the Chan-Paton gauge group under the embedding of U(1) induced by the background electromagnetic field.

Neutral strings

Let us begin with the case where the total charge of the open string vanishes, e₁ = e₂ = e. In this case there is the freedom to add to the coordinates x^± terms proportional to τ ∓ 2π²iα′eσ, which satisfy the boundary conditions (2.25) when e₁ − e₂ = 0. The mode expansions in the 0-1 plane can thereby be written as in (2.26), with (y^±)† = y^∓ and (q^±)† = q^∓. The expansions (2.26) are defined in terms of an orthonormal system of oscillation modes [1,14,45] which solves the variational problem for the action (2.16) on the infinite strip and which diagonalizes it. The canonical momenta conjugate to the fields x^± are given by p^± = ∂_τ x^±, and they lead to the canonical commutation relations of the quantum string theory in the usual way. Because of the Born-Infeld factor in the denominator of the first line of (2.26), one finds that the zero mode positions y^± and momenta q^± obey canonical (Heisenberg) commutation relations, respectively, and are mutually commutative otherwise. The Fourier modes obey the Heisenberg-Weyl algebra [a^±_m, (a^∓_n)†] = n δ_{nm}. They therefore satisfy the same harmonic oscillator commutation relations that they would in the absence of the external field. The Hamiltonian density is (∂_τ x⁺ + ∂_σ x⁺)(∂_τ x⁻ + ∂_σ x⁻), which leads to the total worldsheet Hamiltonian (2.27) in the 0-1 plane, with q⁺q⁻ = ½(q₁² + q₂²). We conclude that the spectrum of a neutral open string is not affected by the electromagnetic field. However, as we saw in the previous subsection, the vacuum-to-vacuum amplitude is modified because the usual Born-Infeld factor appears in the mass-shell condition [1].

Charged strings

When the total charge of the string is non-vanishing, the entire structure of the external field problem is different. The string fields no longer have integer oscillator modes, and the zero modes change completely. In particular, there is no function linear in τ and σ which can satisfy the boundary conditions (2.25) when e₁ − e₂ ≠ 0. The mode expansion in this case is given by (2.28), with the shifted frequencies defined through (2.29). We will assume in this subsection that e₁ > e₂, so that α > 0. The normal mode functions in (2.28) again diagonalize the action (2.16) and solve the wave equation and the boundary conditions (2.25) [1,14,45]. Note that since their integrals are non-zero, the y^± cannot be identified with the center of mass coordinates of the open strings, in contrast to the neutral case. Notice also the appearance of the extra modes a^±_0 compared to the α = 0 case, which by reality are required to be Hermitian operators.
Canonical quantization now identifies the quantum commutators (2.31). The drastic change in the zero mode structure for charged strings is apparent in the commutation relation (2.31), which is ill-defined in the neutral string limit e₁ = e₂. The total worldsheet Hamiltonian in the 0-1 plane can be worked out to be (2.32). The normal ordering constant in (2.32) depends on the (arbitrary) choice of a⁺₀ as an annihilation operator and is required to put the Virasoro algebra in standard form [1,45]. We see therefore that the external electromagnetic field has a drastic effect on the spectrum of charged strings. It shifts the oscillation frequencies by amounts ±α, it modifies the commutation relations of the zero modes, and it changes the zero point energy. Furthermore, the open string momentum operators q^± no longer appear in the mode expansions, while there are extra Fourier operators a^±₀ which create and annihilate quanta of frequency α. In fact, the contribution from the coordinates in the 0-1 plane is formally identical to that of a twisted unprojected sector of an orbifold string theory with twist angle α. This orbifold analogy provides a computationally convenient characterization of the external field problem.

Normally, one would take the quantum states to be eigenstates of definite momentum. However, when e₁ ≠ e₂, it is instead the zero mode operators y^± that commute with the Hamiltonian L₀, and so we may take the states to be eigenstates of y⁺, for example. Note that the operator y⁻ is, according to (2.31), a conjugate momentum operator for y⁺. Since L₀ does not depend on y^±, there is an infinite degeneracy in the spectrum. In fact, the present physical situation is identical to that of a charged particle moving in the plane under the influence of a perpendicular uniform magnetic field. The states form equally spaced Landau levels of infinite degeneracy, with the energy difference between consecutive levels proportional to (e₁ − e₂)F. The operators a^±₀ move the string from one Landau level to another, and their frequency separation (2.29) is proportional to the quantity (e₁ − e₂)F when it is sufficiently small, i.e. in the weak-field limit α = 2α′(e₁ − e₂)F + O(F³) ≪ 1. Deviations from the field theoretic result at strong fields F come from the non-minimality of the electromagnetic string coupling and are parametrized by the non-linear function α of the field. Excited states of the open string are then obtained by acting on the ground state wavefunctions with oscillator creation operators. At the first excited level, there are the states (a⁺₁)†|y⁺⟩ with tachyonic mass −½α(1 + α), and the states (a⁻₁)†|y⁺⟩ with mass ½α(3 + α). This is reminiscent of the situation that occurs in Yang-Mills theory in the presence of a chromomagnetic field condensate, whereby one gluonic polarization becomes unstable due to its tachyonic energy and the other becomes massive [46] (in contrast to the usual bosonic string tachyon state, this instability is not removed by supersymmetry [24]). In fact, as we shall see in the following, the charged string system possesses many instabilities. Only the neutral open string makes sense both physically and analytically.

C. The Schwinger mechanism

In this subsection we will compute the one-loop vacuum energy for charged strings using the operator formalism of the previous subsection, and elucidate somewhat on the instabilities that we have thus far encountered. In particular, we will examine the instability of the string vacuum in a purely electric background [14,8].
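Before setting up the one-loop computation, it is useful to recall, in compact form, the tree-level bound from subsection A that drives this instability (schematic, with the normalization conventions of (2.13) and T_s = 1/2πα′):

```latex
\mathcal{L}_{\rm BI} \;\propto\; -\sqrt{1 - \left(2\pi\alpha' e\,|\vec{E}|\right)^{2}}\,,
\qquad
|\vec{E}| \;\le\; E_c \;=\; \frac{1}{2\pi\alpha'\,e} \;=\; \frac{T_s}{e}\,,
```

so that reality of the action is lost precisely when the electric force on a string endpoint exceeds the string tension.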
The electric background can be obtained from the calculations of the previous subsection by the analytical continuations F = iE and α = −iǫ, corresponding to Wick rotations of both the worldsheet and target space time coordinates to Minkowski signature. The vacuum energy may be computed as the logarithm of the partition function det(L₀ − 1)^{−1/2} of the underlying free conformal field theory σ-model, where L₀ is the total Hamiltonian, comprised of the contributions from the fields transverse to the plane of the electric field, those parallel to the 0-1 plane, and the conformal ghost fields. The annulus amplitude is thereby given by (2.33), where V_d is the volume of spacetime and tr_{(e₁,e₂)} denotes the trace over all string states in the (e₁, e₂) charge sector. The total annulus amplitude is a sum over all allowed endpoint charges. The trace (2.33) is straightforward to evaluate by using the proper time representation (2.34) and the fact that for any set of oscillator operators a_n obeying Heisenberg-Weyl commutation relations there is the formula (2.35), where we have used a basis of all possible multiparticle states. For the transverse degrees of freedom, the oscillator traces are accompanied by d − 2 Gaussian momentum integrals, coming from the analogs of the first term in (2.27), and an integration over the canonically conjugate zero modes y_⊥ of the string fields, which produces a volume factor V_{d−2}. These are also accompanied by a factor (2π)^{d−2} which accounts for the density of quantum phase space states. For the fields along the 0-1 plane, we can use the identity (2.35) with the appropriate shifts of oscillation frequencies given in (2.30). There is also an integration over the zero modes y^± which contribute, according to (2.31), a quantum state density factor α′πE(e₁ − e₂), along with the volume of the 0-1 plane. By incorporating the ghost contributions and putting all of these results together, we arrive finally at the amplitude (2.36), in which C_A(t, E) (2.37) is a field-dependent correction factor. Here Θ_a(ν|τ) denote the standard Jacobi-Erdélyi theta-functions and Θ′₁(ν|τ) = ∂Θ₁(ν|τ)/∂ν. In the zero-field limit the amplitude (2.36) reduces to the expected result (2.22), since C_A(t, 0) = 1. It gives the modification of the neutral string effective action (2.21) to the charged case. In the neutral limit e₁ = e₂, the correction factor (2.37) takes a simple form: the annulus amplitude (2.36) is then proportional to the square of the Born-Infeld Lagrangian for this case, as in (2.21).

The most interesting feature of the vacuum energy (2.36) is that it is imaginary. The theta-function appearing in the denominator of (2.37) contains a trigonometric function, Θ₁(ǫt/2 | it/2) ∝ sin(πtǫ/2), and so the function C_A(t, E) has simple poles on the positive t-axis at t = 2k/|ǫ|, k = 1, 2, .... The amplitude thereby acquires an imaginary part given by the sum of the residues at the poles times a factor of π since, as dictated by the proper definition of the Feynman propagator, the contour of integration in the complex t-plane should pass to the right of all poles. What this quantum instability represents is the spontaneous creation of charged strings from the vacuum [14,8], in analogy to the instability of the vacuum state in quantum electrodynamics [50].
By computing the corresponding residues of the function (2.37), the total rate of pair production is found to be given by (2.38), where the second sum runs through all physical open string states of mass M_S(ǫ), which may be computed from the generating function (2.39). Note that neutral string states do not contribute to the pair production rate, as expected, and indeed the neutral string vacuum energy (2.21,2.22) is real-valued. The expression (2.38) represents the stringy modification of the classic Schwinger probability amplitude for pair creation of charged particles in a uniform external electric field E [50]. In that case the probability per unit volume and unit time is given by (2.40), where Q, J and M are the charge, spin and mass of the created particles. In this quantum field theory calculation the imaginary part of the vacuum energy comes from the determinant det(-D²_A)^{-1/2} of the massive gauge-covariant Dirac operator D_A, which at tree-level would produce the result πM^{-1}QE / sin(πM^{-1}QE). In fact, the result (2.38) coincides with (2.40) with Q = 2α′(e_1 - e_2) in the weak-field limit in d = 4 dimensions, since a particle-antiparticle pair of spin J has 2(2J + 1) physical states. However, in contrast to the field theory case, the string theory deteriorates at strong external fields. Since ǫ → ∞ as E → E_c = T_s/e_1, the total rate for pair production diverges at the critical electric field [8]. Thus the classical instability of the string vacuum state in an electric field can also be seen at the quantum level. At this critical value of the external field, the string tension can no longer stop charged strings from nucleating out of the vacuum. In fact, this limiting instability also occurs for neutral strings. If we concentrate on only the first line of (2.26) (the zero mode contributions), then we see that the open string can be thought of as a rod, of length proportional to E q, which behaves like an electric dipole whose ends carry equal and opposite charges. When an open string is stretched along the direction of the background electric field, the field reduces its energy, and at E = E_c the energy stored in the tension of the string is balanced by the electric energy of the stretched string. For E > E_c, virtual strings can materialize out of the vacuum, stretch to infinity and destabilize the ground state. In fact, from the first line of (2.26) we see that for fixed worldsheet time τ the two endpoints of the string are not at the same value of x^+, but they are always spacelike separated. As the electric field becomes critical the two ends at fixed τ become lightlike separated. Of course, in the genuine Type I theory that these considerations really pertain to, one should add to the annulus amplitude the contribution from its non-orientable counterpart, the Möbius strip diagram. This is straightforward to do, and the role of the Möbius amplitude is to project out the reflection-odd, neutral oriented string states [8]. One should also, for unitarity reasons, consider the contributions from the one-loop closed string diagrams, i.e. the torus and the Klein bottle. Since closed string states do not couple to the external field, their amplitudes are the same as in the zero field limit.
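For orientation, the field theory limit is easy to evaluate explicitly. The sketch below implements the textbook Schwinger rate for a charged scalar in d = 4, which the string expression (2.38) generalizes by summing over the tower of masses M_S(ǫ); the normalization is the standard scalar QED one and is not read off from (2.38) itself.

```python
import math

def schwinger_rate_scalar(E, Q=1.0, M=1.0, kmax=50):
    """Textbook pair production rate per unit volume for a charged scalar
    in four dimensions:
        w = (QE)^2/(2 pi)^3 * sum_k (-1)^(k+1)/k^2 * exp(-pi k M^2/(QE)).
    The string result (2.38) replaces the single mass M by a sum over
    the physical open string spectrum."""
    pref = (Q*E)**2 / (2*math.pi)**3
    return pref * sum((-1)**(k + 1)/k**2 * math.exp(-math.pi*k*M**2/(Q*E))
                      for k in range(1, kmax + 1))

for E in (0.2, 0.5, 1.0, 2.0):
    print(f"E={E:4.1f}  w={schwinger_rate_scalar(E):.3e}")
```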
The four contributions to the total vacuum energy (torus, Klein bottle, annulus and Möbius strip) now have an elegant interpretation in terms of the worldsheet orbifold construction that we mentioned earlier, whereby the torus and annulus diagrams give the contributions of untwisted and twisted sectors, respectively, while the addition of the Klein bottle and Möbius strip diagrams takes care of the projections onto states which are even under the action of the orbifold group. The extension of the calculation to open superstrings is also straightforward, and again one easily recovers the Schwinger formula (2.40) in the weak-field limit [8,56,57].

III. THERMAL ENSEMBLES

In this section we will describe some properties of strings in background electromagnetic fields at finite temperature. For this, we are interested in computing the thermodynamic partition function (3.1), where β = 1/k_B T with k_B the Boltzmann constant and T the temperature. Temperature represents another explicit supersymmetry breaking mechanism, and it leads to a variety of novel effects in string theory. At the forefront of these exotic features is the influence of the density of single particle states on the thermodynamic properties of the string gas. The number of states at level N grows exponentially as e^{4π√N}, which is so rapid that the thermodynamic partition function (3.1) of the free string gas converges only for sufficiently small temperatures T < T_H, where T_H, given in (3.2), is known as the Hagedorn temperature [31]. Generally, models with an exponentially rising density of states exhibit non-extensive thermodynamic quantities, and a pair of such systems can never attain thermal equilibrium. However, in string theory the Hagedorn temperature is not a limiting temperature, because it requires a finite amount of energy to reach it in the canonical ensemble. Rather, it is associated with a phase transition, analogous to the deconfinement transition that occurs in Yang-Mills theory. The Hagedorn temperature T_H is the critical point at which infrared divergences emerge due to a closed string state becoming massless [34,48,6]. The Hagedorn transition may therefore be associated with the appearance of tachyonic winding modes. In the following we will examine how this picture is affected by the presence of background fields. Although the thermodynamic ensemble of superstrings is interesting in its own right [39], the inclusion of electromagnetic fields will allow us in the next section to map the free string gas onto a system of D-branes. Thermal states of superstrings in electromagnetic fields thereby correspond to non-extremal states of D-branes in supergravity which have a natural Hawking radiation and entropy. They are therefore relevant to the microscopic description of black holes in string theory. Since the string theory universally contains gravity, the system destabilizes at finite energy density in the thermodynamic limit. This is due to the Jeans instability, which occurs because a relativistic thermal ensemble at sufficiently large volume reaches its Schwarzschild radius and collapses into a black hole [6]. In the path integral approach to finite temperature string theory, the spacetime is taken as Euclidean space with time x^0 compactified on a circle of circumference β. Temperature affects the string gas because the string can wrap around the compact time direction with a given winding number n ∈ Z, i.e. x^0(r, ϑ + 2π) = x^0(r, ϑ) + nβ.
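The exponential growth of the degeneracies is simple to exhibit numerically. A minimal sketch, assuming the open bosonic string counting ∏_{n≥1}(1 - q^n)^{-24} = Σ_N d_N q^N for 24 transverse oscillator towers, compares ln d_N with the leading exponent 4π√N quoted above:

```python
from math import pi, sqrt, log

def degeneracies(max_level, c=24):
    """Coefficients d_N of prod_{n>=1} (1 - q^n)^(-c) = sum_N d_N q^N,
    built by multiplying in each geometric series 1/(1 - q^n) c times
    (c = 24 transverse bosons of the open bosonic string)."""
    d = [1] + [0]*max_level
    for n in range(1, max_level + 1):
        for _ in range(c):
            for N in range(n, max_level + 1):
                d[N] += d[N - n]
    return d

d = degeneracies(40)
for N in (10, 20, 40):
    # ln d_N approaches 4*pi*sqrt(N), up to power-law prefactor corrections
    print(f"N={N:3d}  ln d_N={log(d[N]):8.3f}  4 pi sqrt(N)={4*pi*sqrt(N):8.3f}")
```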
The winding around the compact time direction affects only the zero modes of the bosonic string embedding field x^0 and can be incorporated by adding a term nβϑ/2π to its mode expansion. In string perturbation theory, the disc amplitude (2.13) is unmodified at finite temperature, because the disc worldsheet cannot wrap the cylindrical target space and so cannot distinguish between a compactified and an uncompactified spacetime. The first corrections due to temperature appear in the annulus amplitude. We can evaluate the thermodynamic free energy F = -ln(Z)/β as before by computing the Polyakov path integral for the action (2.16) and enforcing the periodicity constraint in Euclidean time via the substitution (3.3). We then sum the path integral over all temperature winding modes n ∈ Z. For a uniform electromagnetic field, there are two commonly used gauge choices, the static gauge and the temporal gauge. Here the vector F denotes the temporal components of the Euclidean field strength tensor, F_i = F_{0i}, i = 1, ..., d-1, which is related to the electric field E in Minkowski space by F = iE. In the temporal gauge, the gauge potential is only periodic in Euclidean time up to a gauge transformation, and in this case it is necessary to augment the usual coupling of the edge of a charged open string to the gauge field by adding a generalized Wu-Yang term [3] in order to compensate the gauge transformation. Here we shall choose the periodic static gauge field configuration. On substituting (3.3) into (2.16), the action then reads as in (3.4). It is clear from (3.4) that electric and magnetic backgrounds contribute very differently at finite temperature, and so we shall analyse their couplings separately.

A. Magnetic fields

Let us begin with the purely magnetic case and set F = 0 in (3.4). Let f_ℓ, ℓ = 1, ..., (d-2)/2, be the skew-eigenvalues of the magnetic flux tensor F_{ij}. The zero temperature annulus amplitude when F_{ij} has only a single non-vanishing skew-eigenvalue f_ℓ is given by (2.36,2.37) with E = -if_ℓ and ǫ = iα(f_ℓ), and it is real-valued. Since the mode expansions of the string fields are the same as before, it follows that the only effect of finite temperature comes from the constant term in the first line of (3.4). By summing the path integral over all thermal winding modes, this inserts into the Teichmüller integration defining the annulus amplitude the infinite series (3.5). The one-loop free energy per unit volume of the string gas in the magnetic background is thereby given as (3.6). We will now examine the convergence properties of the Teichmüller integral (3.6). The open string ultraviolet behaviour is determined by the t → 0 region of moduli space. In this limit the theta-functions appearing in (3.6,2.37) have known asymptotics, from which it follows that the integral (3.6) converges in the region t → 0 provided that β > 1/k_B T_H, where T_H is the Hagedorn temperature (3.2). (Of course, the bosonic string theory has a tachyonic instability at any temperature. The Hagedorn temperature is defined in this case as the temperature at which the one-loop free energy of the bosonic string gas diverges even if the contribution from the usual tachyon mode is ignored.) Note that the overall asymptotics are completely independent of the external field, and we therefore conclude that the presence of a magnetic field does not change the value of the Hagedorn temperature of the free open string gas. The open string infrared behaviour, on the other hand, comes from the t → ∞ region of moduli space. In this limit the temperature dependence of (3.6) disappears, Θ_3(0 | iβ²t/32π²α′) ∼ 1, and the conditions for convergence of the integral are the same as at zero temperature. One encounters the infrared magnetic instability that was described in section II.B.2 [56].
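The skew-eigenvalues f_ℓ entering the magnetic amplitude are straightforward to extract numerically, since a real antisymmetric matrix has purely imaginary eigenvalues ±if_ℓ; the block-diagonal example below is hypothetical.

```python
import numpy as np

def skew_eigenvalues(F, tol=1e-10):
    """Skew-eigenvalues f_l of a real antisymmetric matrix F: the
    eigenvalues of F come in purely imaginary pairs +/- i*f_l, so we
    keep one positive member of each pair."""
    assert np.allclose(F, -F.T), "F must be antisymmetric"
    imag = np.sort(np.linalg.eigvals(F).imag)
    return imag[imag > tol]

# Example flux in d - 2 = 4 transverse dimensions with two blocks f_1, f_2
f1, f2 = 0.7, 0.2
F = np.zeros((4, 4))
F[0, 1], F[1, 0] = f1, -f1
F[2, 3], F[3, 2] = f2, -f2
print(skew_eigenvalues(F))   # -> [0.2 0.7]
```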
This infrared magnetic instability in the thermodynamic free energy is also present in supersymmetric Yang-Mills theory at both zero and finite temperature. It is straightforward to repeat the analysis for the fermionic string (where there is no zero temperature tachyon mode). As spacetime bosons are required to obey periodic boundary conditions along the Euclidean time circle and spacetime fermions to obey anti-periodic ones, supersymmetry is explicitly broken and the GSO projection must be accordingly modified. At the one-loop level this may be achieved by inserting an extra weighting (-1)^n in the sum over temperature winding numbers for the (-, +) spin structure in the Neveu-Schwarz sector of the worldsheet theory [6]. Then one can compute that the instabilities encountered above persist for superstrings [56].

B. Electric fields

Now let us consider the case of a purely electric background and set F_{ij} = 0 in (3.4). From the second line of (3.4) we see that there is now a non-trivial coupling between the electric field and the temperature winding modes. This coupling has several dramatic effects on the thermal ensemble. The most glaring one is that it prevents the formation of an equilibrium distribution of charged strings in the electric field [3]. To see this, we consider the zero modes x^µ = x^µ_0 of the string embedding fields on the annulus. In the absence of an electric field, the action is independent of them and integrating them out in the path integral produces a volume factor βV_{d-1}. In the present case, however, they contribute the quantity (3.7). This result shows that thermal states of the string are stable only either for neutral strings or in the absence of the external field. All states except the ground state n = 0 contain excitations of charged particles and therefore have infinite energy. In fact, because of the Schwinger mechanism that we described in section II.C, even the ground state is unstable. The breaking of translational invariance forbids an equilibrium state of charged strings in a constant background electric field. In what follows we shall describe some origins of this instability.

1. Neutral strings

It is natural to consider the neutral string case, e_1 = e_2 = e. In that case the zero modes x_0 disappear from the action (3.4), but the second line still contributes a linear term to the Gaussian form for the oscillatory modes ξ(ϑ) of the string fields. On completing the square, this adds an extra term β²n²tα′e²E²/8 to the argument of the exponential in the infinite series (3.5), where E = -i|F|. The modification of the annulus amplitude (2.21,2.22) to finite temperature is therefore given by the free energy (3.8). By repeating the asymptotic analysis of the previous subsection, we find that the Teichmüller integral (3.8) converges in the open string ultraviolet region provided that the temperature lies below the field dependent Hagedorn temperature (3.9). Thus, in contrast to the magnetic case, an electric background modifies the Hagedorn temperature (3.2) of the neutral string gas by the familiar Born-Infeld Lagrangian [23]. This result was to be expected because, unlike magnetic fields, electric fields couple to the temporal coordinate and therefore scale the momentum of the strings. In turn, they rescale the proper time variable t.
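A numerical sketch of the field dependent Hagedorn temperature is given below. It assumes the Born-Infeld rescaling T_H(E) = T_H(0)√(1 - (2πα′eE)²), which is one natural reading of the statement above; the precise power is fixed by (3.9) and is not reproduced here, so the formula should be treated as an assumption.

```python
import math

ALPHA_P = 1.0                              # alpha'
E_CHARGE = 1.0                             # endpoint charge e
E_C = 1.0/(2*math.pi*ALPHA_P*E_CHARGE)     # critical electric field

def hagedorn_ratio(E):
    """Assumed Born-Infeld rescaling T_H(E)/T_H(0) = sqrt(1 - (E/E_c)^2)
    for the neutral open string gas; it vanishes at the critical field."""
    return math.sqrt(max(0.0, 1.0 - (E/E_C)**2))

for frac in (0.0, 0.5, 0.9, 0.99, 1.0):
    print(f"E/E_c={frac:4.2f}  T_H(E)/T_H(0)={hagedorn_ratio(frac*E_C):.4f}")
```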
Note that the Hagedorn temperature (3.9) decreases with increasing electric field and vanishes at the critical value E = E_c. As is apparent in second quantization [45,23] (see Eq. (2.26)), the field dependent rescaling factor originates as a modification of the string tension T_s = 1/2πα′, which determines the critical temperature (3.2), to an effective tension which vanishes at the critical electric field. This modification of the tension is the reason why the thermodynamic properties of the neutral string gas are altered by the electric background. Again the same conclusions are reached for the full superstring free energy [56].

2. Charged strings

Restricting the spectrum to neutral open strings does not completely cure the electric field instability, because we have to sum over all allowed neutral and charged string states. We will now examine the reasons why finite temperature string theory forbids constant electric fields. This instability can in fact be seen at the field theoretical level. The coordinate space diagonal elements of the (un-normalized) thermal density matrix in quantum electrodynamics for a charged (scalar) particle of mass M and charge Q in a uniform electric field E are given by the proper time integral (3.11) [40,3]. This result shows that the free energy of the system is trivial, in that the integration of (3.11) over y picks up only the ground state of winding number n = 0 (due to the occurrence of the theta-function Θ_3). Note that the density matrix (3.11) is complex-valued and its imaginary part gives the Schwinger probability amplitude (2.40). Naively, the translational symmetry which is broken by the external electric field could be restored by choosing the temporal gauge for the gauge potential rather than the static gauge. However, this gauge choice ruins the global gauge invariance of the system. The gauge potential in this case is not a function on spacetime, because it is multi-valued under periodic shifts around the temperature circle, and it can only be properly defined with respect to a local covering of the thermal direction. Requiring that the theory be independent of the choice of covering requires the addition of generalized Wu-Yang terms to the action (the mathematical details of this construction can be found in [3]). These terms restore gauge invariance, but they also reinstate precisely the same y-dependent factor in (3.11). Therefore, the gauge-invariant free energy remains trivial. For a thermal state of charged strings, the constraint (3.7) forces us to take F = 0. This selects the constant gauge field configuration A_0(x) = a_0. Although this field is pure gauge, it can only be removed by a singular gauge transformation. Therefore, charged states will still depend on a_0, or equivalently the canonical, gauge invariant momentum of the open strings depends on a_0. The gauge field background cannot be removed because there is a non-trivial gauge invariant holonomy e^{i(e_2-e_1)na_0} which arises from the boundary terms in the action (2.16) in the sector of temperature winding number n. This holonomy is simply the Polyakov loop operator for the annular geometry. We can therefore study the free energy for this constant gauge field configuration and compute the effective action for charged strings in a generic, time-independent background gauge field a_0(x_0).
This yields the free energy that is required to introduce a heavy charged particle into the system, and thereby gives information about confinement, which is the pertinent property of the Hagedorn transition. The action is now given by adding to the first line of (3.4) the term i(e_2 - e_1)na_0. After summing over all n ∈ Z, the appropriate modification of the one-loop vacuum energy (2.22) is given by (3.12), where we have made a modular transformation t = 1/s (mapping the one-loop open string annulus diagram onto the tree-level closed string cylinder diagram) and used Poisson resummation along with the modular transformation η(-1/τ) = √(-iτ) η(τ). The integral (3.12) can be evaluated explicitly by expanding the Dedekind function using the formula (3.13), where d^b_N is the degeneracy of bosonic string states at level N. For the first two levels we have d^b_0 = 1 and d^b_1 = 24. By expanding the theta-function Θ_3 in an infinite series we can thereby perform the integral (3.12) and arrive at (3.14), where K_1(z) is the irregular modified Bessel function of order 1. The N = 0 contribution to (3.14) of course diverges because of the tachyonic instability. The next contribution is from the level N = 1, corresponding to the 26-dimensional Yang-Mills multiplet, which is well-defined. From the asymptotic expansion K_1(z) ∼ e^{-z}√(π/2z) for |z| → ∞, it follows that its contribution is suppressed in the low-temperature limit β ≫ √α′ by terms of order e^{-β/√α′}. Nevertheless, this calculation illustrates the general features; the instabilities are cured by computing the one-loop free energy of the superstring gas [3]. Then the lowest N = 0 level yields a finite contribution in the low temperature limit which corresponds to the ten dimensional Yang-Mills supermultiplet. By including the tree-level Born-Infeld actions for the disc amplitudes of the charged string endpoints, we arrive at the total (normalized) effective action for the gauge field a_0(x_0) up to one-loop order in the form (3.15), where the mass parameter µ is given by (3.16). Thus the low temperature modification of the Born-Infeld action is a generalization of the sine-Gordon theory representation of the classical Coulomb gas in which the standard kinetic term for the gauge field is replaced by the Born-Infeld Lagrangian. The main feature of this field theory is that the linearized equation of motion for the minima of the free energy (3.15) takes the form (-∇² + µ²)a_0(x_0) = 0, which has exponentially decaying solutions a_0(x_0) ∼ e^{-µ|x_0|}. In this approximation the constant (3.16) appears as a mass term for the gauge field in (3.15) and acts like a Debye screening mass. It is clear for this reason that constant electric fields cannot be extrema of the effective action, i.e. the existence of uniform electric fields is inconsistent with the existence of a Debye mass. Note that (3.16) vanishes for neutral string states. As expected, the Debye mass (3.16) is the same as the one that would arise in ordinary ten dimensional Yang-Mills theory. The contributions from massive string states are exponentially suppressed by terms of order e^{-β/√α′}. Since µ² ∝ T/T_H(0), for temperatures well below the critical Hagedorn temperature the Debye mass is small and electric fields become more and more long-ranged. Stringy effects essentially only play a role at temperatures near the Hagedorn transition. Furthermore, the Born-Infeld generalization (3.15) of the sine-Gordon model has solitons which generalize the solitary waves of the plasma phase of the ordinary Coulomb gas [3]. In gauge field theories these solitons exist as Z_N domain walls.
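The quoted low-temperature suppression can be checked directly against the asymptotics of K_1; scipy's k1 is the standard implementation of the irregular modified Bessel function of order 1.

```python
import math
from scipy.special import k1

# K_1(z) ~ e^{-z} sqrt(pi/(2 z)) for large z, so a level-N term
# involving K_1 of argument ~ beta/sqrt(alpha') is damped by powers of
# e^{-beta/sqrt(alpha')} in the low-temperature regime beta >> sqrt(alpha').
for z in (5.0, 10.0, 20.0):
    asym = math.exp(-z)*math.sqrt(math.pi/(2*z))
    print(f"z={z:5.1f}  K1={k1(z):.6e}  asymptotic={asym:.6e}"
          f"  ratio={k1(z)/asym:.4f}")
```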
This soliton characterization could prove useful for other aspects of the Hagedorn transition in background fields.

IV. D-BRANE DYNAMICS

T-duality maps free open strings to open strings whose endpoints are attached to D-branes. It replaces the quantities ∂_a x^i by iǫ_{ab}∂^b x^i and Neumann boundary conditions for the string embedding fields x^i with Dirichlet ones. The results of the previous sections may be interpreted directly as the appropriate contributions to the tension of a D9-brane with some background field distribution. T-dualizing these expressions in 9 - p of the spacetime directions is then tantamount in string perturbation theory to adding an extra open string mass factor t^{-p/2} e^{-r²t/2πα′} to the Teichmüller integration measure, reflecting the Dirichlet nature of the 9 - p transverse directions, where r is the separation between parallel branes. If the background field is constant, then its components which do not lie along the Dp-brane can be gauged away. The open string annulus amplitude then becomes the one-loop effective potential between two Dp-branes with generic background fields on each brane. Such a configuration describes a boundary condensate of the stretched open strings between the branes in the electromagnetic field. By keeping the transverse electric field component non-vanishing, we may also describe the interaction potential between moving branes. Much of the analysis we have made thus far for the problem of open strings in electromagnetic fields has dual analogs for D-brane dynamics. However, in the D-brane picture many of the stringy effects that we have unveiled for the external field problem have very natural dynamical explanations. In this section we will use the external field problem to describe the dynamics of D-branes. We will restrict our attention to D0-branes for simplicity. Under T-duality, electric fields map onto the trajectories of D-branes as follows. The boundary coupling exp(ie∮ A(x^0)·∂x) of a time-varying, spatially constant electric field E = ∂_0 A to a string endpoint that carries charge e is replaced by the vertex operator exp((1/2πα′)∮ y(x^0)·∂_⊥x) for a moving D0-brane [21,38] travelling with velocity v = ∂_0 y = 2πα′eE. The β-function equations for this coupling can be interpreted as the classical equations of motion for the 0-brane. A constant electric field thereby corresponds to uniform motion of the branes, while a neutral string would represent a pair of branes moving with zero relative velocity. In string perturbation theory the electric field and moving D-brane problems are identical because of the perturbative duality between Neumann and Dirichlet boundary conditions on the string fields [38]. In the former case the effective dynamics for a slowly varying electric field is governed by the Born-Infeld action. Under T-duality this action simply maps onto the usual action (4.1) for a relativistic point particle, where T_0 = 1/g_s√α′ is the 0-brane tension, i.e. the BPS mass of the D-particles. The Born-Infeld action is the non-trivial result of a resummation of all stringy order α′ corrections, and among other things it leads to a limiting value E_c = (2πα′e)^{-1} of the external electric field, above which the system becomes unstable. In the dual picture, this is simply a consequence of the laws of relativistic particle mechanics for the 0-brane, with the critical velocity corresponding to the speed of light.
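The dictionary between the electric field and the dual brane velocity fits in a few lines; assuming the stated map v = 2πα′eE, the critical field E_c = (2πα′e)^{-1} corresponds exactly to the relativistic bound v → 1.

```python
import math

ALPHA_P = 1.0
E_CHARGE = 1.0
E_C = 1.0/(2*math.pi*ALPHA_P*E_CHARGE)   # critical field (2 pi alpha' e)^(-1)

def dual_velocity(E):
    """T-dual D0-brane velocity v = 2*pi*alpha'*e*E; v -> 1 (the speed
    of light) exactly as E -> E_c, so the Born-Infeld bound becomes the
    bound of relativistic particle mechanics."""
    return 2*math.pi*ALPHA_P*E_CHARGE*E

for frac in (0.1, 0.5, 0.9, 0.999):
    v = dual_velocity(frac*E_C)
    gamma = 1.0/math.sqrt(1.0 - v*v)
    print(f"E/E_c={frac:6.3f}  v={v:6.3f}  gamma={gamma:8.2f}")
```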
At the critical velocity, we can make a large boost to bring the brane to rest, so that in the T-dual picture of the original string theory this amounts to boosting to large momentum. Thus the string theory with the electric background near the critical limit is equivalent to the string theory in the infinite momentum frame, or equivalently in the formalism of discrete light-cone quantization. A planar static D-brane is a BPS defect that preserves half of the original spacetime supersymmetries. D-branes in supersymmetric configurations exert no static force on each other, because supersymmetry ensures that the Casimir energy of the stretched open strings vanishes [47]. Uniform motion of a single D-brane cannot have any non-trivial consequences, because it depends on the choice of an inertial frame. However, setting a pair of branes in relative motion breaks the supersymmetry of the system, and a velocity dependent potential appears between them. Another non-BPS configuration of D-branes is that which lives in a thermal state of the theory. In the supergravity picture, non-extremal branes have a natural temperature and entropy. The Hawking radiation of a certain class of near extremal black holes with Ramond-Ramond charge may be interpreted in terms of the emission of closed string modes by a thermal state of D-branes [54,15,42]. It has been suggested [9] that the gravitational Hawking temperature and the temperature of a Boltzmann gas of D0-branes should be identified. This is based on the conjecture [41] that the black hole free energy resulting from classical supergravity is described accurately by the large 't Hooft coupling limit of the supersymmetric matrix quantum mechanics describing the dynamics of N D-particles [10]. In this model, which comes from the leading Yang-Mills reduction of the (non-abelian) Born-Infeld action, the brane coordinate fields are N×N Hermitian matrices whose eigenvalues represent the collective coordinates of the D-particles, while the off-diagonal fluctuations represent the Higgs fields corresponding to short open string excitations between the parallel branes [58]. The breaking of supersymmetry is then explicit in the fact that the thermal partition function is computed with periodic temporal boundary conditions for the boson fields and anti-periodic ones for the fermion fields. The model accurately describes the leading velocity corrections to the tree-level action (4.1) [22], so it is natural to use it to describe the thermodynamics of moving D-branes. In this section we will describe some new calculations which compute these corrections to the static D-brane amplitudes.

A. Velocity dependent forces

In this subsection we will present a novel derivation, using the Polyakov path integral, of the known formula [7,12] for the one-loop vacuum energy of D-branes moving with uniform velocity. Path integral treatments of D-branes can also be found in [32]. Consider a D-string in the presence of a boundary condensate of constant electric field E in Type IIB superstring theory. The one-loop correction to the effective Born-Infeld action at tree-level comes from the annulus string diagram, which describes two D1-branes with an open string stretching between them. Neumann boundary conditions are taken along the axes 0, 1, and Dirichlet ones along the transverse axes 2, ..., d-1. These latter directions will be labelled collectively in what follows by the superscript ⊥.
The open string parametrization is 0 ≤ σ ≤ 1, 0 ≤ τ ≤ t, where ln t is the Teichmüller parameter of the annulus. The string carries charges e_σ at the worldsheet boundaries σ = 0, 1. The Euclidean action is analogous to (2.23), where the bulk action S is given by (4.3).

Bosonic case

If the coordinate x^1 ≡ x^1_N is compactified on a circle of circumference L_N, then we can make a T-duality transformation along the 1-axis which interchanges the Neumann and Dirichlet boundary conditions for the open string. The new coordinate x^1_D takes values on a dual circle. The boundary conditions (4.4) can be rewritten as (4.5), where we have simply denoted x^1 ≡ x^1_D, and the boost parameters are the analytic continuations of the velocities of the two string endpoints from Minkowski to Euclidean space arising from Wick rotation of both x^0 and τ. In the case of static D0-branes (v_σ = 0), the mode expansions of the string fields which diagonalize the action and solve the boundary conditions are well-known to be given by (4.7) [43], where a_{-mn} = a*_{mn}, b_{-mn} = b*_{mn}, and x^⊥_{-mn} = (x^⊥_{mn})*. The mode expansions which solve the boundary conditions (4.5) may then be obtained by rotating the fields (4.7) through the angle πα_0 + πασ in the 0-1 plane to get

x^0(τ, σ) = cos(πα_0 + πασ) x^0_{(0)}(τ, σ) + sin(πα_0 + πασ) x^1_{(0)}(τ, σ),

and similarly for x^1(τ, σ). The fields (4.8) obey the boundary conditions (4.5) with the identifications of the rotation angles as the rapidities of the Euclidean boost (4.9). The mode expansion (4.8) diagonalizes the action (4.3), which enters the Euclidean path integral (4.10) with the boundary conditions (4.5). Here an extra factor of 2 has been inserted to account for the symmetry under interchange of the two endpoints of an oriented string [47]. Let us begin by evaluating the contributions from the non-zero modes of the fields x^0 and x^1 to the path integral (4.10). The modes with indices (m, n) and (m′, n′) are orthogonal for m′ ≠ -m, and so their contribution to the action (4.3) becomes diagonal; here we have used the fact that the modes with either sine or cosine functions are orthogonal over the semi-period of σ, while the cross terms which mix sine and cosine functions do not occur. Evaluating the resulting Gaussian functional integral in (4.10) over these non-zero modes produces the determinant (4.12), where we have used the product formula sinh(πx) = πx ∏_{m=1}^∞ (1 + x²/m²). In arriving at (4.12) we have ignored an overall constant factor which may be set equal to unity by using zeta-function regularization (2.12) of the infinite product. Analogously, the contribution to (4.3) from the zero modes of the fields x^0 and x^1 is given in (4.14). The corresponding Gaussian functional integral gives (4.15). For any non-vanishing α the distance y^1_1 - y^1_0 between the ends of the string along the direction of motion can be absorbed into the quantity a_{00}, and it disappears from the final result, as it should. If α = 0 the integral over a_{00} produces the volume along the 0-axis, and the last term on the right-hand side of Eq. (4.14) remains just as for the transverse directions, which contribute the usual quantity (4.16). We now multiply the three quantities (4.12), (4.15) and (4.16) together, take into account the contributions from the conformal ghost fields, and use the identity (4.17). In this way we find the vacuum energy functional (4.10) in the critical dimension d = 26. This coincides with the result of Refs. [7,12] for the bosonic string, which is expressed in terms of the Minkowski space rapidity ǫ = iα.
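The infinite product quoted for the non-zero-mode determinants can be verified numerically; a truncated check of sinh(πx) = πx ∏_{m≥1}(1 + x²/m²), whose truncation error falls off like x²/M for cutoff M:

```python
import math

def euler_product(x, mmax=200000):
    """Truncated Euler product prod_{m=1}^{M} (1 + x^2/m^2), which
    converges (slowly, error ~ x^2/M) to sinh(pi x)/(pi x)."""
    p = 1.0
    for m in range(1, mmax + 1):
        p *= 1.0 + (x/m)**2
    return p

for x in (0.5, 1.0, 2.0):
    exact = math.sinh(math.pi*x)/(math.pi*x)
    print(f"x={x}  product={euler_product(x):.6f}  sinh(pi x)/(pi x)={exact:.6f}")
```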
Note that, in the present derivation, we have simply used the action (4.3), without explicitly adding boundary terms, to correctly reproduce the bosonic amplitude for moving D0-branes.

RNS formulation

We will now turn to the superstring vacuum amplitude. The open string is again parametrized by the worldsheet coordinates (τ, σ). On the annulus there are four standard spin structures, (+, -) and (+, +) in the R-sector (associated with spacetime fermions) and (-, -) and (-, +) in the NS-sector (associated with spacetime bosons), which represent the periodicities of the worldsheet fermion fields with respect to (τ, σ). The GSO projection dictates that the physical amplitude is obtained by summing over the contributions from the four spin structures. The fermionic part of the superstring action is given by (4.19), where ψ^µ and ψ̃^µ are Grassmann-valued fields which transform as SO(8) vectors. In the absence of an external field, or for static D-branes, the fermion fields obey the standard superstring boundary conditions (4.20), where a = 0 for the R-sector and a = 1 for the NS-sector. The mode expansions in the (±, -) sectors are given by (4.21), while in the (±, +) sectors they are given by (4.22). For moving branes we rotate the fields ψ^µ_{(0)} with respect to ψ̃^µ_{(0)} by the orthogonal matrix [29]

M_{µν}(σ) = [e^{(πα_0 + πασ)Σ^{01}}]_{µν},   (4.23)

where (Σ^{ρλ})_{µν} = δ^ρ_µ δ^λ_ν - δ^λ_µ δ^ρ_ν are the generators of SO(8) rotations in the vector representation and the rapidities α are given by Eq. (4.9). Here only the 0 and 1 components of the fields are rotated, since the boost is in the 0-1 plane. The resulting mode expansions (4.26) diagonalize the action (4.19) in each of the four worldsheet sectors. We shall first analyse the contributions from each sector separately.

(+, -) sector: Using (4.21) with a = 0 and substituting (4.26) into (4.19) leads to the corresponding action, and the functional Gaussian integral over the Grassmann variables ψ^µ_{mn} thereby produces the corresponding determinant. In the next sector, the functional Gaussian integral over the Grassmann variables ψ^µ_{mn} gives (4.31).

(-, +) sector: Finally, by setting a = 1 in (4.22), the action can be read off, and the functional Gaussian integral over the Grassmann variables ψ^µ_{mn} yields the remaining determinant.

We now take into account the contributions from the conformal anti-ghost fields in each of the three non-vanishing sectors above, sum over the spin structures with weight 1/2, and multiply by the bosonic amplitude in the superstring critical dimension d = 10. In this way we arrive at the known formula (4.34) of Ref. [7] for the superstring vacuum energy functional. By using an identity which is a consequence of the Riemann identity for Jacobi theta-functions, we arrive at the result of Refs. [29,12] after a modular transformation t → 1/t of the annular Teichmüller parameter.

B. D-brane scattering

After analytical continuation to Minkowski space, the quantity (4.34) can be interpreted as the forward scattering amplitude for two D-particles moving with relative velocity v = tanh πǫ and impact parameter b = |y^⊥_1 - y^⊥_0|. It is a semi-classical result, in that the D-branes are treated as classical sources and higher worldsheet topologies are neglected, i.e. both the Compton wavelength and the Schwarzschild radius of the D-branes are taken to vanish. The branes interact via virtual pairs of open oriented strings which are stretched by the relative motion. The integrand of (4.34) has an infinite number of poles along the real t-axis at t = 2π(2n + 1)/ǫ, where n is an integer.
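The theta-function algebra at work here can be spot-checked numerically; for instance Jacobi's abstruse identity Θ₃⁴ = Θ₂⁴ + Θ₄⁴ at ν = 0, the simplest instance of the Riemann identity invoked above, is what forces the static (ǫ = 0) amplitude to vanish. A sketch using the defining q-series with q = e^{iπτ} on the imaginary axis:

```python
import math

def theta2(q, nmax=60):
    return 2*sum(q**((n + 0.5)**2) for n in range(nmax))

def theta3(q, nmax=60):
    return 1 + 2*sum(q**(n*n) for n in range(1, nmax))

def theta4(q, nmax=60):
    return 1 + 2*sum((-1)**n * q**(n*n) for n in range(1, nmax))

q = math.exp(-math.pi*1.7)       # tau = 1.7i
lhs = theta3(q)**4
rhs = theta2(q)**4 + theta4(q)**4
print(lhs, rhs, abs(lhs - rhs))  # abstruse identity: difference ~ 0
```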
These poles at t = 2π(2n + 1)/ǫ arise from the zeroes of the trigonometric sine function in the product representation of the theta-function Θ_1. As a consequence, the vacuum energy acquires an imaginary part which is given by the sum over the residues of the poles and which gives the probability that the virtual strings materialize. This phenomenon is simply the dual counterpart of the open string pair production in a uniform background electric field that we described in section II.C. In the present case the result has a much simpler interpretation. As the two D-particles move away from each other, they continuously transfer their energy to any open strings that stretch between them. A virtual pair of open strings can therefore nucleate out of the vacuum and slow down or even completely stop the relative motion. The real part of the scattering amplitude also reveals a striking feature of the low velocity dynamics of D-particles. The theta-functions Θ_a(ν|τ) are even functions of ν, and in the low velocity limit Θ_1(ǫt/2 | it/2) may be expanded in powers of the velocity. The absence of a constant term in the velocity expansion of (4.34) is due to the cancellation of the gravitational attraction and the Ramond-Ramond repulsion for static D-branes [47]. However, not only the static force, but also the order v² force between two D-particles vanishes, i.e. identical Type II D-branes do not scatter at non-relativistic velocities. Generally, the order v² scattering of heavy solitons can be described by geodesic motion in the moduli space of zero modes. Therefore, the moduli space of a pair of D-particles, which at tree level is the flat quotient space R⁹ × R⁹/Z₂, remains completely flat to all orders in the α′ expansion. This is the dual statement of the fact that for maximally supersymmetric gauge theories the Maxwell F² term in the effective action is not renormalized. The next contribution comes at order v⁴ in the velocity expansion. The expansion of the effective action for two D-particles in their velocities, divided by powers of their separation r, is thereby given as (4.36). The v⁴ potential in (4.36) is the standard interaction term for D0-branes in ten dimensional supergravity [22]. The vacuum energy functional (4.34) could actually have been determined using standard formulas for the partition functions of free massless fields with twisted boundary conditions. The spectrum of an open string stretched between two moving D-branes can be determined from the operatorial mode expansions of the light-cone fields x^± = (1/√2)(x^0 ∓ x^1), which are given by (4.37). It is easy to verify that x^0 and x^1 then obey the boundary conditions (4.5). In particular, x^±(τ, 1) = e^{±πǫ} x^±(τ, 0), so that the two string endpoints have relative velocity v = tanh πǫ. Reality requires (a^±_n)† = a^±_{-n}, while canonical quantization implies the commutation relations [a^+_n, a^-_m] = (n + iǫ) δ_{n+m,0}. The D-brane motion modifies the vacuum energy, as can be read off from the light-like component of the total worldsheet Hamiltonian (4.38), where we have included the dependence on the impact parameter. Analogous mode expansions arise for the worldsheet fermion fields. The relevant effect of the brane motion on the stretched strings is to shift their oscillation frequencies by ±iǫ in the boost plane and their energy by an overall velocity dependent term. Similar expansions arise in the twisted sectors of orbifold conformal field theories, with iǫ identified as a real-valued rotation angle.
Therefore, in the operator formalism, the problem of moving D-branes is formally identical to that of the stretched strings between the branes belonging to a twisted sector of an orbifold string theory, with imaginary twist angle corresponding to the rapidity of the boost. All of this is again completely analogous to the spectrum of free open strings in a uniform electric field background, except for some important changes. The expression (4.9) for the twist parameter ǫ = iα has no obvious interpretation in the electric field case, while here it is recognized as the relativistic sum of the two brane velocities. Consistent with Lorentz invariance, the spectrum only depends on the velocity v of one brane in the rest frame of the other. Furthermore, zero modes are omitted in the light-cone mode expansions (4.37) to account for the fact that D-branes interact locally in transverse space and in time. Keeping this and the orbifold analogy in mind, it is straightforward to arrive at the annulus amplitude (4.34). The orbifold interpretation further enables a very simple calculation of the velocity dependent potential in (4.36). In the quasi-static approximation, which is valid to leading order in the inverse separation, the potential is given simply as the sum of the ground state energies of the corresponding harmonic oscillators, as in (4.38). There are ten complex bosonic oscillators, of frequencies ω^⊥_b = √(r²) with multiplicity eight and ω^±_b = √(r² ± 2iv) with multiplicity one each, where r = √(b² + v²τ²) is the distance between the branes at time τ. There are also two ghost oscillators, each of frequency ω_ghosts = √(r²), and 16 fermionic oscillators of frequencies ω^±_f = √(r² ± iv) with multiplicity eight each. The velocity dependent potential is then given by the corresponding sum of ground state energies. For v = 0 the frequencies cancel and the static potential vanishes. For v ≠ 0 we can expand each frequency as a power series in r^{-1}. At the first three orders in v/r² the potential vanishes, while at fourth order the energy between the 0-branes gives the expected leading result V(r) = -15v⁴/16r⁷ + ....

C. Thermodynamics

Just as we did with the thermal configuration of free open strings in background electric fields, it is possible to demonstrate that there are no excited states of a pair of moving D-branes with v ≠ 0 at finite temperature [3]. Again the partition function picks up only the temperature independent piece, and we may conclude that D-brane dynamics forbid uniform velocity motion at finite temperature. The triviality comes from the same zero mode operators, associated with the presence of a Wu-Yang term, as in the electric field problem. Using T-duality, we may therefore attribute this property of D-brane dynamics to the Debye screening of electric fields that we discussed in section III.B.2. Just as Debye screening forbids constant electric fields in open superstring theory, it also forbids the uniform motion of D-branes. This implies that there is a damping of their motion analogous to Debye screening. Recall that, in the dual picture, this is not the case for constant magnetic fields. At one-loop order a magnetic background corresponds to a relative misalignment between a pair of branes, which is an allowed configuration at finite temperature. To investigate further the properties of D-brane dynamics at finite temperature, one must consider appropriate non-uniform motion.
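The quasi-static frequency sum is easy to verify numerically. The sketch below assumes ground-state weights of +ω per complex bosonic oscillator, -ω for the ghost pair and -ω/2 per fermionic oscillator; this weighting is an assumption about normalization, adopted because it reproduces the quoted coefficient -15/16.

```python
import cmath

def quasi_static_potential(r, v):
    """Sum of oscillator ground-state energies for the stretched string:
    8 transverse bosons (omega = r) plus the boosted pair sqrt(r^2 +/- 2iv),
    minus the two ghosts (omega = r), minus half of the 8 + 8 fermions of
    frequency sqrt(r^2 +/- iv). Conjugate pairs make the result real."""
    bosons   = 8*r + cmath.sqrt(r*r + 2j*v) + cmath.sqrt(r*r - 2j*v)
    ghosts   = 2*r
    fermions = 0.5*8*(cmath.sqrt(r*r + 1j*v) + cmath.sqrt(r*r - 1j*v))
    return (bosons - ghosts - fermions).real

r = 3.0
for v in (0.05, 0.1, 0.2):
    print(f"v={v}  V={quasi_static_potential(r, v):.6e}"
          f"  -15 v^4/(16 r^7)={-15.0*v**4/(16.0*r**7):.6e}")
```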
Non-uniform motion is a difficult problem to treat using the usual, direct methods of string perturbation theory, as is the dual problem for time-dependent background fields. However, one can compute the thermodynamic free energy for moving D0-branes by using the low-energy effective Yang-Mills theory description of the D-brane dynamics [58,10]. Then a perturbative calculation will be valid in the domain where g_s^{1/3}√α′ ≪ r. We can therefore effectively describe the thermodynamics in the limit of weak string coupling, or equivalently when the branes are well separated. Since the D0-branes have mass T_0 = 1/g_s√α′ and are therefore very heavy in this limit, this calculation will take into account the thermal fluctuations of the stretched superstrings, but not of the D-particles themselves. The action is obtained from the dimensional reduction of ten-dimensional maximally supersymmetric Yang-Mills theory to one temporal and zero spatial dimensions, where the Yang-Mills coupling constant g_YM is related to the string coupling g_s by g²_YM = g_s/4π²(α′)^{3/2}. The gauge fields A_µ(τ) and the Majorana spinor fields Ψ(τ) depend only on the time coordinate τ. The diagonal components describe the D-particle degrees of freedom, with the Euclidean time coordinate τ compactified on a circle of circumference β = 1/k_B T. The effective action for the D-particle coordinates is constructed by integrating out the off-diagonal components of the gauge fields, the fermion fields, and the Faddeev-Popov ghost fields required for gauge fixing, with periodic boundary conditions for the gauge and ghost fields and anti-periodic ones for the adjoint fermion fields around the compact temperature direction. We will consider again only the case of a single pair of D0-branes whose worldlines lie along the periodic temporal direction. The integration in (4.43) can be done in a simultaneous loop expansion in the gauge theory and in a velocity expansion in the brane configurations. This will produce meaningful results in the limit where r ≡ |y_1 - y_2| is large and where the velocities v_a = ẏ_a are small (the dot denotes differentiation with respect to τ). The one-loop contribution can be obtained by expanding the action (4.40) to second order in the off-diagonal components of the gauge fields, and in the ghost and fermion fields. The result of this standard Gaussian functional integration produces a ratio of determinants of the form (4.44) [2], where we have introduced a second order differential operator on the temperature circle which arises from the gauge covariant derivatives. Here a_µ = a¹_µ - a²_µ, and we have included the tree-level term which gives the non-relativistic kinetic energies of the D0-branes. The subscript B (resp. F) indicates that the determinant is to be evaluated with periodic (resp. anti-periodic) boundary conditions, corresponding to the contributions from the gauge and ghost (resp. adjoint fermion) fields. The abelian field strength tensors f^b_{µν} have non-vanishing components f^b_{0i} = ȧ^b_i. The temporal components a^b_0 of the gauge fields may be taken to be independent of the compactified time variable via the residual abelian gauge invariance of the problem, and by periodicity to lie in the interval (-π/β, π/β]. The determinants in (4.44) have been evaluated for static D-brane configurations in [2]. In what follows we shall extend this computation to leading orders in the velocity expansion for moving D-branes.
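Before turning to that computation, note that determinants on the temperature circle reduce to regularized Matsubara products, and the identity ∏_{n∈Z}[(2πn/β + a_0)² + ω²] ∝ cosh βω - cos βa_0 is what produces the cosh - cos structures appearing below. A numerical check, computed as a ratio so that the divergent ω-independent constant cancels:

```python
import math

def matsubara_ratio(beta, w, a0, w_ref=1.0, nmax=200000):
    """Ratio of Matsubara products
       prod_n [(2 pi n/beta + a0)^2 + w^2] / prod_n [(2 pi n/beta)^2 + w_ref^2],
    which converges to (cosh(beta*w) - cos(beta*a0))/(cosh(beta*w_ref) - 1)."""
    r = (a0*a0 + w*w)/(w_ref*w_ref)          # n = 0 term
    for n in range(1, nmax + 1):
        wn = 2*math.pi*n/beta
        num = ((wn + a0)**2 + w*w)*((wn - a0)**2 + w*w)   # n and -n paired
        den = (wn*wn + w_ref*w_ref)**2
        r *= num/den
    return r

beta, w, a0 = 2.0, 1.3, 0.7
lhs = matsubara_ratio(beta, w, a0)
rhs = (math.cosh(beta*w) - math.cos(beta*a0))/(math.cosh(beta) - 1.0)
print(lhs, rhs)
```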
By using the proper time representation (2.34), we are led to first evaluate the trace (4.46). We will begin by computing the determinant in (4.44) with bosonic boundary conditions on the temporal circle. For this, we insert the periodic delta-function into (4.46), which incorporates the proper Matsubara frequencies and gives (4.48), where we have introduced the operators A_n and B_n. The expression (4.48) is viewed as operating on the constant function 1, and the derivatives only contribute when they encounter terms involving |a|². Note that generally the position variables a^b(τ) are only periodic up to a permutation of the identical D-particles, which ensures that the configuration of the coordinates is periodic. In the present case this means that the relative coordinate a(τ) can be either periodic or anti-periodic. In both of these sectors, the distance |a(τ)| is a periodic function, and hence so is the operator A_n. To unravel the expression (4.48), we use the generalization of the Baker-Campbell-Hausdorff formula

e^{-t(A_n+B_n)} = e^{-tA_n} e^{C_n} e^{-tB_n},   (4.50)

where the operator C_n is built from iterated commutators of A_n and B_n. In the loop expansion we expand around the tree-level configuration, whose equation of motion is ä = 0. The only non-vanishing time derivatives of the operator A_n are then Ȧ_n = 2a·ȧ and Ä_n = 2|ȧ|². Moreover, since the integrand of (4.48) is a periodic function, we can freely integrate by parts and drop surface terms. Using these facts we arrive at the expression (4.52). There are eight contributions of the form (4.52), one for each of the directions transverse to the plane of motion. From this result we must also subtract the fermionic contribution, which comes from evaluating the determinant with anti-periodic boundary conditions. The net effect of inserting the anti-periodic delta-function into (4.46) is to replace a_0 by a_0 + π/β everywhere, i.e. cos βa_0 → -cos βa_0 in (4.52). The final quantity we need is the velocity corrected determinant (the first two determinants of (4.44)). This can be straightforwardly evaluated by using the description given in the previous subsection of how the oscillator frequencies are modified in the velocity-dependent potential between two D0-branes. The leading order term 8 ln[(cosh β|a| - cos βa_0)/(cosh β|a| + cos βa_0)] with no time derivatives is corrected at finite velocity to

6 ln(cosh β|a| - cos βa_0) + ln(cosh β√(|a|² + 2i|ȧ|) - cos βa_0) + ln(cosh β√(|a|² - 2i|ȧ|) - cos βa_0) - 4 ln(cosh β√(|a|² + i|ȧ|) + cos βa_0) - 4 ln(cosh β√(|a|² - i|ȧ|) + cos βa_0).   (4.53)

By summing the expansion of (4.53) to second order in the velocity |ȧ| and the eight order-|ȧ|² contributions of (4.52), and subtracting the eight order-|ȧ|² terms in (4.52) with cos βa_0 → -cos βa_0, we arrive finally at the effective action (4.54), where the D0-brane separation r is time-dependent and obeys periodic boundary conditions on the temperature circle. The quantity T_0/2 is the reduced mass of the two D-particle system, while r/2πα′ is the energy of a string which has Dirichlet boundary conditions on hypersurfaces a distance r apart. Note that the effective action is an odd function of the variable x = cos βa_0. (It is defined so that it is odd under reflection of x but not periodic under the shift x → x + y; there is no way to preserve both of these symmetries, and this gives a simple example of an anomaly.) The action (4.54) simplifies in the limit βr ≫ 2πα′, and the low-temperature expansion can be carried out to leading orders. The second term in (4.54) has a direct interpretation in string perturbation theory.
One can compute the annulus diagram for the open superstring, in compactified Euclidean time of circumference β, whose ends lie on two stationary D0-branes separated by distance r. The charges at the endpoints of the string couple to a constant U(1) gauge field which is parametrized by ν ∈ (-1, 1], and which enters the problem through the quantized temporal momentum p_0 = 2π(n - ν)/β, n ∈ Z + (a-1)/2, of the open string whose worldsheet winds around the spacetime cylinder. Then the one-loop thermal partition function of the string gas can be written as Z_str(β, r, ν) in (4.56) [28], where the superstring spectrum is given by (4.57), with N the oscillator occupation number and d_N the degeneracy of superstring states at level N, which may be computed from the corresponding generating function. For the lowest level we have d_0 = 8 and E_0 = r/2πα′. The factor of 2 in the power of (4.56) is again due to the exchange symmetry of the string endpoints. The partition function (4.56) is equal to the ratio of the Fermi and Bose distributions, with power twice the degeneracy of states and the parameter iν playing the role of a chemical potential. The static limit v = 0 of (4.54) coincides with ln Z_str truncated to the massless modes (N = 0), with the identification πν = βa_0. As stressed in [2], the integration over a_0 of the effective action is required for gauge invariance of the free energy, or equivalently to enforce Gauss' law for the charges at the ends of the open string which are induced on the D-branes. The effective potential (4.43) between D0-branes is thereby given from (4.54) as S_eff[y_a] = -ln ∫_{-1}^{1} dν e^{-S_eff}. This has a natural explanation in the closed string formulation, obtained by mapping the open string annulus diagram onto the cylinder diagram via the standard modular transformation. Then the path integral describes the closed string propagator corresponding to the interaction between two D0-branes, rather than the thermal partition function as in the case of an open string. When two D0-branes interact, they can exchange several closed strings, not only one. As all such exchanges are of the same order in the string coupling constant, they exponentiate, since the closed strings are identical, and naturally produce the result (4.56) in the closed string language. Furthermore, in this formulation it is clear that there is only a single gauge field parameter ν for each multi-string term, because now the system is composed of just two interacting D0-branes rather than a gas of D-particles. These facts result in the effective potential as claimed. In the static limit, this potential is logarithmic and attractive at short distances. The singularity occurs as the D0-branes fall on top of one another, in which case the non-abelian gauge symmetry which is broken by separated branes is restored [58]. Then the one-loop approximation breaks down, and this demonstrates that the thermodynamics of D0-branes must be treated as a problem in quantum statistical mechanics, defined by the path integral (4.42) over both periodic and anti-periodic trajectories y(τ). On the other hand, the leading velocity corrections to the static thermal potential are repulsive at short distances. Note that these corrections are of order v² and vanish in the zero temperature limit, as expected. This illustrates that the moduli space of the two D-particle system is curved in a very non-trivial way by thermal effects.
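A sketch of the level counting used above: assuming the generating function has the GSO-projected form 8 ∏_{n≥1}((1 + q^n)/(1 - q^n))^8, a hypothetical normalization chosen only because it reproduces d_0 = 8 as quoted, the degeneracies d_N can be generated as follows.

```python
def superstring_degeneracies(max_level, c=8, norm=8):
    """Coefficients of norm * prod_{n>=1} ((1+q^n)/(1-q^n))^c, as an
    assumed form of the generating function for the d_N; the prefactor
    norm = 8 is a guess fixed by d_0 = 8."""
    d = [1] + [0]*max_level
    for n in range(1, max_level + 1):
        for _ in range(c):                          # multiply by (1 + q^n)
            for N in range(max_level, n - 1, -1):
                d[N] += d[N - n]
        for _ in range(c):                          # multiply by 1/(1 - q^n)
            for N in range(n, max_level + 1):
                d[N] += d[N - n]
    return [norm*x for x in d]

print(superstring_degeneracies(4))   # starts with d_0 = 8
```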
The calculation presented in this subsection can be extended to compute the thermal corrections to the order v⁴ gravitational interaction between moving D-branes, and thereby to shed more light on the dynamical role of D0-branes in black hole thermodynamics.

V. NONCOMMUTATIVE D-BRANE GEOMETRIES

A recent surge of activity in string theory and quantum field theory has come with the realization that D-branes in certain background supergravity fields lend an explicit realization to some old ideas that the classical notions of spacetime and general relativity at very short distances must be drastically altered. At these length scales quantum gravitational fluctuations cannot be ignored and spacetime is no longer described by a differentiable manifold. Following earlier suggestions, noncommutative geometry has been proposed as the appropriate mathematical framework to describe the short scale structure of spacetime, and in particular nonperturbative properties of string theory. The fact that quantum field theory on a noncommutative space arises naturally in string theory [51] and M-theory [20] suggests that spacetime noncommutativity is a general feature of a unified theory of quantum gravity. D-brane worldvolumes become noncommutative manifolds when there is a constant Neveu-Schwarz two-form field B_{µν} on them. This field can be coupled to the usual open string σ-model (2.23) in the neutral limit e_1 = e_2 = e by adding the topological action -i∫_Σ x*B. This term is a total derivative, and so it only contributes to the boundary conditions on the string fields, not to their equations of motion. The endpoints of the string (the boundaries σ = 0, 1 of Σ) are now interpreted as ending on a D-brane of a certain dimensionality. The B field only appears in the gauge invariant combination B_{µν} - eF_{µν} with the U(1) gauge field on the brane (which we continue to denote by B_{µν}). Therefore, the uniform Neveu-Schwarz background field is equivalent to a constant electromagnetic field on the D-brane, and so the following analysis will unify our discussions from earlier sections. Since the background fields can be gauged away in the directions transverse to the D-brane worldvolume, we shall only study the quantities associated with the worldvolume hyperplane itself. When the target space has Euclidean signature, the noncommutativity of the string endpoint coordinates can be understood through the analogy, discussed in section II.B.2, between the external field problem for open strings and the classic Landau problem. The σ-model action describing the coupling of strings to a magnetic field on the branes is formally equivalent to the Landau action (5.1), which describes the motion of a particle of charge Q and mass M in the plane y = (y¹, y²) and in the presence of a uniform perpendicular magnetic field of magnitude B. Here A_i = -(B/2)ǫ_{ij}y^j is the corresponding vector potential. In the limit of a strong magnetic field B ≫ M (with M fixed), the action (5.1) is already expressed in phase space, with the spatial coordinates y¹, y² being the canonically conjugate variables. In canonical quantization, the position variables become noncommuting operators with [y^i, y^j] = (i/QB) ǫ^{ij}. The mass gap between Landau levels is QB/M, so that the limit of strong magnetic field projects the quantum mechanical spectrum of this system onto the lowest Landau level, and the spatial coordinates live in a noncommutative space.
As we will see in the following, this is precisely what happens to the string endpoints when there is a constant magnetic field on the D-branes, and the D-brane worldvolume becomes a noncommutative manifold. However, as one can anticipate from our earlier analyses, the picture changes drastically in Minkowski signature, corresponding to an electric field on the branes.

A. Magnetic fields and noncommutative field theory

We will start with the case of Euclidean spacetime, so that B_{µν} represents a uniform magnetic field on the D-branes, which we assume is of maximal rank. The open string boundary conditions are given by (2.24), with the replacement of -e_kF_{µν} by B_{µν} everywhere. To see how noncommutative geometry arises on the D-brane worldvolume, we will use the operatorial, covariant quantization formalism of section II.B.1, but now in full generality and with a more careful analysis of the canonical quantization. The mode expansions which solve the bulk equations of motion □x^µ = 0 and the boundary conditions (2.24) are given by the familiar expressions (5.2), which involve the usual Born-Infeld factor. As is evident from the expression for the string propagator in the background field [51], the symmetric tensor (5.3) is the open string metric, i.e. the metric seen by the endpoints of the string, while the identity metric 𝟙 is the bulk, closed string metric that defines the σ-model action. We can now straightforwardly compute the equal-time, canonical commutation relations as described in section II.B.1. Those involving the worldsheet momentum density uniquely fix the usual Heisenberg commutation relations for the zero modes x^µ, q^ν and the standard Heisenberg-Weyl commutation relations for the oscillatory modes a^µ_n in the metric G. The subtle relation comes from the equal-time commutator [x^µ(τ, σ), x^ν(τ, σ′)] = 0 [4,19]. By using the Heisenberg-Weyl commutation relations and the mode expansion (5.2), this commutator is readily seen to be given by (5.4), where the antisymmetric tensor θ^{µν} is determined by the open string external field. By integrating the completeness relation (2.8) we may arrive at a Fourier series expansion valid for σ + σ′ ∈ (0, 2). From (5.4) we see that for σ, σ′ ∈ (0, 1), in the bulk of the string worldsheet, the canonical commutation relations may be satisfied by fixing the commutators of the zero mode position operators to be [y^µ, y^ν] = iθ^{µν}. The y^µ therefore generate a noncommutative algebra of operators and are interpreted as coordinates on a noncommutative space. They guarantee that the equal-time commutators are unmodified in the bulk of the worldsheet. This must be the case, since the coupling to the external field only modifies the boundaries of the string worldsheet, not the interior. However, from (5.4) and (5.7) it now follows that the open string endpoint coordinates become noncommuting operators, [x^µ(τ, 0), x^ν(τ, 0)] = iθ^{µν}, with all other embedding field commutators vanishing. The commutation relations (5.8) arise from the compatibility of the open string boundary conditions with the standard commutators, and they imply that the presence of the B-field deforms the D-brane worldvolume to a noncommutative manifold. Note that the noncommutativity of the worldvolume coordinates cannot be probed by any closed string objects (such as supergravity fields). This is because of the flip in sign between the commutators (5.8) at the two ends of the string, which arises from the change of orientation.
The left and right moving modes receive equal and opposite contributions from the $B$ field, and the noncommutativity averages out in the region between the two D-branes. In fact, one can explicitly calculate that the open string center of mass coordinates $x^\mu_{\rm cm}(\tau) = \int_0^1 d\sigma\; x^\mu(\tau, \sigma)$ commute. Thus the transverse space remains an ordinary (commutative) manifold. Indeed, we recall that the external field in the neutral case does not change the physical spectrum of the theory. The only effect of the magnetic field is to change the metric $\mathbb{1}$ to the open string metric (5.3). Recall also from section II.C that the open string is not point-like, but rather behaves like a neutral magnetic dipole whose two endpoints are at different positions. The dipole grows in the direction transverse to the motion by an amount proportional to $B^{\mu\nu} q_\nu$, and the fuzziness of space originates from its size [53,11].

It is also possible to see noncommutativity in the charged string case [18], which formally corresponds to different external fields $B_k = B - e_k F$ on the two D-branes between which the open strings stretch. By using the mode expansions (2.28) and the canonical commutation relations (2.30), (2.31), together with the identity

$$\sum_{n=1}^{\infty} \frac{2\alpha}{\alpha^2 - n^2} + \frac{1}{\alpha} = \pi \cot \pi\alpha \qquad (5.10)$$

for $\alpha \notin \mathbb{Z}$, we may infer the noncommutativity relations (5.11), with all other embedding field commutators vanishing [18]. Thus the noncommutativity is localized at the string endpoints and is determined by the field strengths on the D-branes. Exactly the same noncommutativity factors are obtained as if one quantized an individual open string ending on the same D-brane. This simply reflects the fact that noncommutativity is an intrinsic property of the brane worldvolume and not of the short-distance probe that is used. Notice also that the noncommutativity parameters are proportional to the string scale $\alpha'$ and thereby represent genuine stringy effects. The results are in fact exact to all orders in $\alpha'$ and the string coupling constant $g_s$, because noncommutativity is a short distance effect which does not care about the worldsheet topology. The loop corrections to the above results have been analysed in [35] with the same conclusions.

These same results can be reached by studying operator product expansions of open string vertex operators [49,51]. In this analysis one can identify a particular regime of the string theory in which the vertex operator algebra reduces to a deformation of the ring of functions $f(y)$ on the D-brane worldvolume [36,51]. It corresponds to taking the correlated limits $\alpha' \to 0$ (the field theory limit), $g_s \to 0$ (weakly coupled strings), and $B_{\mu\nu} \to \infty$ (strong magnetic field), with the quantities $(\alpha')^2 B_{\mu\nu}$ and $g_s \sqrt{\det \alpha' B}$ finite. The open string metric (5.3) is given by $G = -(2\pi\alpha' B)^2$ in this limit, since the closed string metric effectively scales out of the problem. Furthermore, from (5.2) it follows that the massive string modes are also scaled away from the endpoint zero modes. Thus all closed string states are completely decoupled from the problem, i.e. the gravitational modes are removed and an effective field theory remains. However, this is not a conventional field theory, because the noncommutativity parameter is also finite in this limit, $\theta = 1/B$.
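The explicit forms of (5.3) and (5.5) are lost in this copy; assuming the standard Seiberg-Witten parametrization with closed string metric $\mathbb{1}$, they presumably read:

```latex
% Open string metric (5.3) and noncommutativity parameter (5.5),
% reconstructed assuming Seiberg-Witten conventions with closed
% string metric equal to the identity:
G_{\mu\nu} = \left( \mathbb{1} - (2\pi\alpha' B)^{2} \right)_{\mu\nu},
\qquad
\theta^{\mu\nu} = - (2\pi\alpha')^{2}
    \left( \frac{B}{\mathbb{1} - (2\pi\alpha' B)^{2}} \right)^{\mu\nu}.
```

In the correlated limit above, the $\mathbb{1}$ is negligible against $(2\pi\alpha' B)^2$, reproducing $G = -(2\pi\alpha' B)^2$ and $\theta = 1/B$ as stated in the text.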
Indeed, because of (5.7), the resulting projection of the vertex operator algebra is not an ordinary function algebra, but rather the one obtained by deforming the pointwise multiplication $f(y)g(y)$ of two functions to a product defined by a bidifferential operator of infinite order [36,51]. This is given by the classic Moyal star-product

$$f(y) \star g(y) = \exp\left( \frac{i}{2}\,\theta^{\mu\nu}\,\frac{\partial}{\partial y^\mu}\,\frac{\partial}{\partial y'^\nu} \right) f(y)\, g(y') \Big|_{y' = y}, \qquad (5.12)$$

which is associative, but non-local and noncommutative. The commutation relations (5.7) may then be satisfied by replacing ordinary operator products with star-products of the coordinates $y^\mu$. Note that in this decoupling limit the $\sigma$-model action (2.23) reduces to a sum of two quantum mechanical, boundary actions for the endpoint charges, which are each formally equivalent to the Landau action (5.1) in the limit $B \to \infty$. This limit is thereby analogous to the projection onto the lowest-lying Landau level. The effects of noncommutativity from the string zero modes are emphasized in [33].

Proceeding as before, it can be shown that the effective field theory is given by a noncommutative generalization, obtained by replacing ordinary (commutative) products of fields with the Moyal product (5.12), of the Dirac-Born-Infeld action [38] on the D-brane worldvolume, which describes non-linear electrodynamics on a fluctuating membrane [37]. This can be used to identify the effective open string coupling constant as [51]

$$G_s = g_s \sqrt{\det\left( \mathbb{1} + 2\pi\alpha' B \right)}. \qquad (5.13)$$

After supersymmetrization, the low-energy effective field theory of noncommutative D-branes is noncommutative supersymmetric Yang-Mills theory with 16 supercharges (the number of supersymmetries preserved by the D-branes in the $B$ field background) and spacetime metric $G_{\mu\nu}$. The Yang-Mills coupling constant in the decoupling limit described above is given by $g^2_{\rm YM} \propto G_s = g_s \sqrt{\det 2\pi\alpha' B}$.

Quantum field theory on a noncommutative space appears to be the unique consistent deformation of ordinary quantum field theory. These theories exhibit a variety of novel effects which lead to new physics not encountered in conventional quantum field theories. Many of these effects have counterparts in string theory, and noncommutative field theories are believed to lie somewhere between ordinary field theory and string theory. For instance, one of the most important results is that infrared and ultraviolet effects do not decouple in a noncommutative field theory [44], which can be understood from the fact that the open string dipoles grow in size with their energy. The larger the momentum, the larger is the spatial extension of the object. Furthermore, noncommutative scalar field theories can contain stable soliton solutions even if their commutative counterparts do not [26], and these noncommutative solitons can be realized as D-branes in string field theory. Because of these striking features, intensive studies have been initiated which use noncommutative quantum field theory to study D-branes in the presence of a background magnetic field.

B. Electric fields and noncommutative open string theory

Let us now Wick rotate to Minkowski signature and consider a uniform electric field $E = |\vec{E}|$ on the branes, i.e. $B_{ij} = 0$ and $E_i = -iB_{0i} \neq 0$. Then $\theta^{0i} \neq 0$ and the D-brane worldvolume is space/time noncommutative. There are several reasons why one is interested in such a noncommutative theory.
First of all, the lack of commutativity of time is in conflict with our current understanding of quantum mechanics, where time is not an operator but rather a parameter which labels the evolution of the system. Understanding space/time noncommutativity may therefore shed light on the role of time in string theory and quantum gravity. Secondly, the space/time commutator implies the uncertainty relation $\Delta y^0\, \Delta y^i \sim \theta^{0i}$ between time and space. This is simply the string space/time uncertainty principle that has been advocated as a generic property of string theory [59]. Finally, in the absence of external fields the effective supersymmetric Yang-Mills theory on the four-dimensional worldvolume of coincident D3-branes is known to possess an exact Montonen-Olive S-duality $g_{\rm YM} \leftrightarrow 1/g_{\rm YM}$. In the presence of a background electromagnetic field we expect this symmetry to act as an electric-magnetic duality exchanging electric and magnetic degrees of freedom. Naively, then, we expect the strong coupling dual of spatially noncommutative Yang-Mills theory in four dimensions to be a temporally noncommutative gauge theory.

This latter line of reasoning is, however, incorrect. Noncommutative quantum field theory with a noncommuting time direction is neither unitary nor causal. It suffers from severe acausal effects, such as events which precede their causes and objects which grow instead of Lorentz contract as they are boosted. For example, the open string electric dipoles extend longitudinally by an amount proportional to $E \cdot q$. However, the string theory in a background electric field is, at least perturbatively, unitary and causal, as is evident in first quantization. Stringy effects eventually conspire to cancel the acausal effects that arise (for instance in the zero mode dipoles), and the model at the level of string theory is perfectly well-defined. Therefore, while the S-dual of the electric field problem for open strings is certainly the corresponding magnetic one, the noncommutative Yang-Mills theories cannot be related in such a manner.

What has gone wrong is that the electric field problem does not possess a noncommutative field theory limit [52,27]. Recall from the previous subsection that one of the decoupling limits involved making the external magnetic field arbitrarily large. In the electric case, the system destabilizes above the critical value $E_c$. This instability is now reflected in the singularities that arise at $E = E_c$ in the open string parameters (5.3), (5.5) and (5.13). It prevents the correlated limit of the previous subsection from being taken. The electric field cannot be scaled to infinity, and so one cannot reach the field theoretic limit $\alpha' \to 0$ in which all string oscillator modes decouple. The key point, though, is that the effective tension of an open string stretched along the direction of the electric field is given by (3.10). One can take a limit in which $\alpha' \to 0$ such that the theory is space/time noncommutative and the bulk modes, including gravity, decouple from the branes. However, the effective string scale $\alpha'_{\rm eff} = 1/2\pi T_{\rm eff}$ is finite in this limit, and the effective theory will be a string theory, not a field theory. For this, we rotate so that the electric field lies along the 1-axis, and rescale the coordinates so that the diagonal elements of the closed string metric in the 0-1 plane are proportional to $[1 - (2\pi\alpha' E)^2]^{-1}$. We then take the limit whereby the electric field becomes critical, $2\pi\alpha' E \to 1$, and $\alpha' \to 0$ with $\alpha'_{\rm eff}$ fixed.
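The effective tension referred to as (3.10) is not reproduced in this copy; the standard expression for a neutral open string aligned with the electric field (an assumption about the precise form of (3.10)) makes the scaling transparent:

```latex
% Effective tension (the form of (3.10) is assumed) and the NCOS
% scaling limit:
T_{\rm eff} = \frac{1}{2\pi\alpha'} \left[ 1 - (2\pi\alpha' E)^{2} \right],
\qquad
\alpha'_{\rm eff} = \frac{1}{2\pi T_{\rm eff}}
    = \frac{\alpha'}{1 - (2\pi\alpha' E)^{2}} .
```

Sending $2\pi\alpha' E \to 1$ and $\alpha' \to 0$ together, with $\alpha'_{\rm eff}$ held fixed, keeps the on-brane string scale finite while the bulk decouples.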
For finite $\alpha'$, the open strings are effectively tensionless in the limit $E \to E_c$ and the open string metric (5.3) is finite. The closed string metric scales to infinity, and the Moyal phases are determined by the effective string scale as $\theta = 2\pi\alpha'_{\rm eff}$ and are therefore finite. Recall now that the neutral open string spectrum is unaltered by the electric field, and so the open string states are of finite mass. Thus we are left with an open string theory on the D-brane worldvolume, which is a space/time noncommutative manifold. The fact that the noncommutativity scale is intrinsically tied to the string scale means that in order to make sense of a noncommutative space/time manifold, one needs to make precise the notion of an Einstein spacetime at the string scale.

The main property of this string theory is that, unlike ordinary string theory which requires closed string states for its consistency, it is completely decoupled from the bulk worldsheet states. To see this, suppose that a light open string state tries to escape to the bulk by turning into a closed string state (via a modular transformation). For this to occur, the stretched open string has to bend over in order for its endpoints to touch each other. Part of it will stretch against the electric field and will thereby become very heavy as $E \to E_c$. Thus the closed string modes become infinitely massive, and energetics prevent the open strings which live on the branes from turning into closed strings and propagating into the bulk. Note that, according to (5.13), this string theory is interacting provided we scale the closed string coupling $g_s \to \infty$. Therefore, these open strings describe a particular limit of strongly coupled closed strings in a critical electric field. We conclude that in the low energy limit considered above, the effective theory includes interacting open strings on the D-branes together with decoupled free closed strings in the bulk region. The open string theory is decoupled from gravity, and the underlying spacetime on the D-branes is noncommutative. This theory is known as noncommutative open string theory [52]. In the case of D3-branes it is the strong coupling dual of supersymmetric noncommutative Yang-Mills theory in four dimensions [27]. The action involving these open strings is related to the action of ordinary open string theory by the replacement of all ordinary products of string fields with the appropriate noncommutative Moyal products (5.12).

The thermal ensembles, and in particular the Hagedorn behaviour [30], of this string theory are particularly interesting, since this theory does not contain closed strings and decouples from gravity. In conventional superstring theory, which is difficult to study because of the thermodynamic instabilities that arise in gravitating systems, there is a first order phase transition below the Hagedorn temperature [6]. In the present case, one finds that, in the scaling limit and as the temperature is increased, a massless closed string state appears in the bulk at precisely the Hagedorn temperature (3.9) arising from the open string density of states. The Hagedorn transition in this case is a second order phase transition, and the high temperature phase involves long fundamental strings separating from the D-branes on which the noncommutative open string theory is defined [30].
Generating a mouse model for relapsed Sonic Hedgehog medulloblastoma

Summary
Tumor relapse is the leading adverse prognostic factor in medulloblastoma (MB). However, there is still no established mouse model for MB relapse, impeding our efforts to develop strategies to treat relapsed MB. We present a protocol for generating a mouse model for relapsed MB using irradiation by optimizing mouse breeding and age, as well as irradiation dosage and timing. We then detail procedures for determining tumor relapse based on tumor cell trans-differentiation in MB tissue, immunohistochemistry, and tumor cell isolation. For complete details on the use and execution of this protocol, please refer to Guo et al. (2021). 1

BEFORE YOU BEGIN

Background
Medulloblastoma (MB) is the most common type of brain tumor in children, comprising four principal groups: WNT Group, Sonic Hedgehog (SHH) Group, Group 3, and Group 4. The most adverse prognostic factor across all MB diagnoses is tumor relapse, which occurs in approximately 30% of all cases and is often fatal. 2 The functional heterogeneity within relapsed MB tumors presents significant therapeutic challenges. 3 To recapitulate this in a mouse model, we use the conditional deletion of Patched 1 (Ptch1) in cerebellar granule neuron precursors using Math1-Cre mice, resulting in MB formation in Math1-Cre/Ptch1^loxp/loxp mice with 100% penetrance. 4 By further lineage tracing and genomic sequencing, recent studies reveal that tumor cells trans-differentiate into astrocytes in relapsed MB. 1

Calculate irradiation dosage/timing (formula): Model 280 Cesium Irradiator dose rates.
Note: The calculation of dosage rates varies for different irradiator devices. Dosage rates should be adjusted and corrected for decay according to the irradiator device (e.g., Model 280 Cs-137 decay factor = 0.9886 in 6 months); a decay-correction sketch appears at the end of this section.

1. Generation of MPG mice that will develop MB in their cerebella.
a. Cross Math1-Cre mice with Ptch1^loxp/loxp mice to obtain Math1-Cre/Ptch1^loxp/wt mice.
b. Cross Ptch1^loxp/loxp mice with R26R-GFP mice to obtain Ptch1^loxp/wt/R26R-GFP mice.
c. Cross Math1-Cre/Ptch1^loxp/wt mice with Ptch1^loxp/wt/R26R-GFP mice to obtain MPG mice.
Note: R26R-GFP mice are not necessary for the generation of the relapse model. We use R26R-GFP mice to lineage-trace tumor cells in relapsed MB (Mao et al. 7).
Note: All MPG mice develop MB in their cerebella. MPG mice may display cranial tumor signs including ataxia, hunched back, and tilted head starting from 3-4 weeks of age.
Note: To determine the optimal stage of tumor development for generating tumor relapse, MPG mice at 2 or 4 weeks of age are used for irradiation.
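The dose-rate formula itself did not survive extraction; the sketch below shows the standard decay-correction arithmetic for a Cs-137 source (half-life 30.17 years, which reproduces the 0.9886 six-month factor quoted above). The calibration rate used in the example is a placeholder, not the actual Model 280 value:

```python
# Decay-corrected dose rate and exposure time for a Cs-137 irradiator.
# A minimal sketch: the calibration rate below is a placeholder, not
# the actual Model 280 value.
CS137_HALF_LIFE_YEARS = 30.17

def corrected_dose_rate(calibration_rate_gy_min, years_since_calibration):
    """Dose rate today, corrected for radioactive decay."""
    decay_factor = 0.5 ** (years_since_calibration / CS137_HALF_LIFE_YEARS)
    return calibration_rate_gy_min * decay_factor

def exposure_time_min(target_dose_gy, dose_rate_gy_min):
    """Irradiation time needed to deliver the target dose."""
    return target_dose_gy / dose_rate_gy_min

# Six months of decay reproduces the factor quoted in the note:
print(0.5 ** (0.5 / CS137_HALF_LIFE_YEARS))          # ~0.9886
# Example: time to deliver 0.5 Gy with a hypothetical 1.25 Gy/min
# calibration rate measured 1.5 years ago:
rate = corrected_dose_rate(1.25, 1.5)
print(exposure_time_min(0.5, rate))                  # ~0.41 min
```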
Part 2: Irradiation
Timing: 30 min (for step 2)
Timing: dosage dependent; <5 min/mouse (for step 3)
Timing: 12 h-14 days (for step 4)
Note: A license authorizing the use of sealed sources containing radioactive material may be required to operate the irradiator. Consult the institutional radiation safety office for the regulations.

2. Prepare Mice for Irradiation. Mice must be properly anesthetized for the irradiation procedure to minimize pain and distress, as well as movement during the procedure, to ensure accurate dosages of irradiation.
a. Prepare 10 mL of Ketamine/Xylazine anesthetic solution.
i. Prepare 1 mL of Ketamine (100 mg/mL) at a final concentration of 10 mg/mL.
ii. Prepare 0.25 mL of Xylazine (20 mg/mL) at a final concentration of 0.5 mg/mL.
iii. Mix the above with 8.75 mL of saline solution to prepare a final 10 mL anesthetic solution.
*The above solutions are prepared under sterile conditions.
b. Administer Ketamine/Xylazine via intraperitoneal (IP) injection.
i. Measure the body weight of mice using an animal weighing scale.
ii. Inject mice with the Ketamine/Xylazine solution (10 µL/g body weight; see the dose sketch after step 4).
*The appropriate depth of anesthesia should be confirmed by verifying a lack of pain response to tail pinching (no longer than 10 min after injection of the Ketamine/Xylazine solution).
CRITICAL: Ketamine is a controlled substance, which should be used following the guidelines of the institutional IBC.
CRITICAL: Mice should be euthanized if they display signs of ketamine overdosage (respiratory depression).
c. Position and shield mice for irradiation.
i. Position the mouse (4 weeks of age) lying prone in a leucite drawer (as shown in Figure 1), and make sure that the cerebellum is centered inside of the circular irradiation area.
ii. Expose the cerebellum, but shield the rest of the brain with the lead cover.
Note: The cerebellum is located between the two lines (Line 1 and Line 2 in Figure 1E). Line 1 is between the two eyes, and Line 2 is aligned with the base of the skull.

3. Perform Irradiation Procedure. Following approval, the irradiation procedure is conducted to reduce tumor burden, allowing subsequent tumor relapse.
a. Obtain instruction and approval from the Institutional Radiation Safety Committee before using the irradiator machine.
ii. Shield the mouse brain using the lead cover, exposing the cerebellum region.
iii. Cover the drawer with the mouse inside by replacing the leucite lid.
d. Set the irradiation time(s) and begin the procedure.
e. Remove the mouse from the drawer; repeat for additional mice (steps 3-5).
Note: To maintain sterile conditions, the leucite drawer can be wiped with a biocide solution between each mouse irradiation.
f. Secure the irradiator after use.
Note: If any problems or questions arise, contact the radiation safety office and read all operating procedures. Do not attempt any adjustments or repairs without authorization.
CRITICAL: Make sure the mouse does not move at all in the drawer during the irradiation procedure. Any movement could potentially alter the irradiation dosage that the mouse cerebellum actually receives.

4. Post-Irradiation Care. Mice must receive proper post-irradiation care for optimal survival throughout the experiment.
a. Post irradiation, place mice on top of a warm isothermal pad until they regain upright posture and walk normally.
b. Return all of the irradiated mice to a sterilized cage, with adequate food and water.
c. Routinely monitor mice twice daily, or more often if poor health is observed (fatigue, ataxia, dehydration, or anorexia).
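The injection volume in step 2b is taken as 10 µL/g; with the 10 mg/mL ketamine and 0.5 mg/mL xylazine working solution of step 2a, this corresponds to the usual ~100 mg/kg ketamine plus 5 mg/kg xylazine. A minimal sketch of the arithmetic (the 10 µL/g reading of the garbled unit is an assumption):

```python
# Injection volume and delivered dose for the ketamine/xylazine mix
# of step 2a (10 mg/mL ketamine, 0.5 mg/mL xylazine). The 10 uL/g
# injection volume is an assumed reading of the garbled unit.
KETAMINE_MG_PER_ML = 10.0
XYLAZINE_MG_PER_ML = 0.5
DOSE_UL_PER_G = 10.0

def injection_plan(body_weight_g):
    """Return (volume in mL, ketamine mg/kg, xylazine mg/kg)."""
    volume_ml = body_weight_g * DOSE_UL_PER_G / 1000.0
    weight_kg = body_weight_g / 1000.0
    return (volume_ml,
            volume_ml * KETAMINE_MG_PER_ML / weight_kg,
            volume_ml * XYLAZINE_MG_PER_ML / weight_kg)

# A 20 g mouse receives 0.2 mL, i.e. 100 mg/kg ketamine and
# 5 mg/kg xylazine:
print(injection_plan(20.0))
```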
5. Detection of relapsed tumor by MRI. Tumor relapse and tumor volume in MPG mice after the irradiation are measured via MRI.
a. Anesthetize the mouse with Ketamine/Xylazine as described in step 2a.
b. Visualize the tumor-bearing brain by MRI using a GE MRI scanner (repetition time, 3,450 ms; echo time, 159 ms; 12 slices at 0.8 mm per slice).
c. Analyze the MRI images obtained using a T2-fast spin echo sequence.

6. Tumor Tissue Collection and Processing. Dissect the mouse brain and minimize disruption of brain tissues during collection.
a. Euthanize mice according to the guidelines of the Institutional Animal Care and Use Committee (IACUC). MPG mice can be euthanized by CO2 or by cervical dislocation following the routine procedure approved by the IACUC at Fox Chase Cancer Center.
b. Decapitate the mouse head with a cut posterior to the ears using surgical scissors. Using the scissors, make a midline incision in the skin. Flip the skin over the eyes.
c. Holding the mouse head with forceps, access the brain by inserting microdissection scissors horizontally into the foramen magnum and cutting straight between the eyes.
d. Using forceps, peel away the skull to expose the forebrain and cerebellum. Cut and remove the brain stem (anterior to the cerebellum) and forebrain as much as possible.
e. Carefully rinse the cerebellum (containing the MB tumor) with PBS.

7. Prepare Tumor Tissues for Immunohistochemistry. Brain and tumor tissues are processed for further immunohistochemical analysis.
a. Carefully place the cerebellum in a 15 mL tube filled with 4% PFA for fixation, and incubate overnight (~12 h) at 4 °C.
b. Remove the cerebellum from the 4% PFA solution and transfer it to a new 15 mL tube filled with 30% sucrose for dehydration. Incubate at 4 °C for 24 h or until the cerebellum sinks to the bottom of the tube.
c. Embed the cerebellum in optimal cutting temperature compound (OCT compound), and freeze the block at −80 °C overnight (~12 h).
d. Place the block in a −20 °C cryostat for at least 1 h to equilibrate the tissue before proceeding to cryosectioning.
e. Cut 8-12 µm thick frozen sections of the tumor-bearing cerebellum using a cryostat, and mount the sections on microscope slides.

8. Immunofluorescent Staining. Brain tissues are harvested for immunofluorescence and microscopy analysis to detect changes in tumor cell proliferation, apoptosis, and astrocytic trans-differentiation patterns in the relapsed tumor.
a. Rinse the tumor slides with PBST.
b. Carefully pipet 100 µL of 10% Normal Goat Serum (NGS) to cover the entire section. Incubate for blocking at room temperature (20 °C-25 °C) for 20-30 min.

EXPECTED OUTCOMES

We optimized the dosage and the age of tumor-bearing mice for irradiation based on survival after the irradiation (Table 1). Our results suggest that 0.5 Gy irradiation is a relatively safe dosage for Ptch1-deficient mice at 3 weeks of age. Justification: Most MPG mice at 6 weeks of age died within 3 days following irradiation at dosages ranging from 0.5-2 Gy, suggesting that irradiation is lethal for MPG mice at 6 weeks of age. Although a significant proportion of mice at 3 weeks of age succumbed to irradiation at dosages of 1, 1.5, or 2 Gy, all mice (6/6) survived 0.5 Gy irradiation, suggesting that 0.5 Gy is a safe dosage for irradiating mice at 3 weeks of age.
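Tumor volume from the MRI stack of step 5 (12 slices, 0.8 mm each) is commonly estimated by slice summation; a minimal sketch under that assumption, with hypothetical segmented areas (the values below are illustrative, not the authors' pipeline):

```python
# Tumor volume from the MRI stack of step 5 (12 slices, 0.8 mm each)
# by slice summation. The segmented per-slice tumor areas below are
# hypothetical; they would come from tracing the T2 images.
SLICE_THICKNESS_MM = 0.8

def tumor_volume_mm3(slice_areas_mm2):
    """Sum of (segmented area x slice thickness) over all slices."""
    return sum(slice_areas_mm2) * SLICE_THICKNESS_MM

areas = [0.0, 1.2, 4.5, 8.1, 10.3, 9.8, 7.4, 4.0, 1.1, 0.0, 0.0, 0.0]
print(tumor_volume_mm3(areas))   # ~37 mm^3
```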
Further MRI analyses revealed that irradiation at 0.5 Gy substantially reduced the tumor volume in Ptch1-deficient mice (Figures 2A and 2B), and tumor volume increased significantly at 2 weeks following the irradiation (Figure 2C). Consistent with the tumor volume changes following the irradiation, extensive apoptosis was detected in tumor tissues within 3 days after the irradiation (Figures 3A and 3B). The percentage of apoptotic cells in tumor tissues at 2 weeks post-irradiation was reduced compared with that at 3 days following the irradiation, but was still increased compared with that in control tumor tissue (Figures 3C and 3D). Tumor cell proliferation was significantly inhibited by the irradiation (Figures 3E and 3F), as expected. However, tumor cells resumed their proliferation 2 weeks after the irradiation (Figures 3G and 3H). Consistent with our previous report, 1 astrocytes were found to be negative for GFP in tumor tissue from MPG mice (Figure 4A), suggesting that astrocytes and tumor cells were lineage-separated in primary MB. However, following the irradiation, an increased number of GFP-positive astrocytes was detected in tumor tissues (Figure 4B). By 2 weeks after the irradiation, the majority of astrocytes were GFP-positive (Figure 4C), indicating that most of the astrocytes in the relapsed tumor originate from tumor cells.

QUANTIFICATION AND STATISTICAL ANALYSIS

For quantification of immunofluorescent staining of brain tissue frozen sections, five fields were counted for each sample, with each field containing approximately 2,000-2,500 cells. We reported the averages of these five fields. Statistical analysis was performed using SPSS Statistics software.

LIMITATIONS

The optimal irradiation dosage and age of mice (tumor volume) for tumor relapse were determined using Math1-Cre/Ptch1^loxp/loxp mice in this study, and may not be applicable to other medulloblastoma mouse models such as NeuroD2-SmoA1 mice.

TROUBLESHOOTING

Problem 1
The mouse moves before or during the irradiation treatment. If the mouse moves in the chamber during irradiation exposure (Figure 1D), two problems can arise: the mouse will not receive the expected dosage of irradiation, and the lead shield may not properly protect the mouse.
Potential solution
Mice may start to move during the irradiation if too much time has passed after the anesthetic solution injection. To avoid this, irradiate the mice within 30 min following the anesthetic solution injection. If needed, additional anesthetic solution (please refer to step 2a) may be applied to immobilize the mice during the irradiation. Note that it is not recommended to administer an additional dosage of irradiation if mice move during the irradiation. To achieve consistency and avoid outliers in the experiment, repeat if needed with a new animal.

Problem 2
Regional differences in the distribution of cell apoptosis in tumor tissues after irradiation. If the lead shield covers parts of the cerebellar region, it may block the exposure of tumor tissue to the irradiation, leading to an uneven distribution of irradiation-induced apoptosis in tumor tissues.
Potential solution
Position the radiation area over the cerebellar region (tumor-bearing area) between the line aligning with the mouse's ears and the line aligning with the skull base (please refer to Figure 1D). Make sure that the lead shield does not cover any part of the radiation area.

Problem 3
Mice die following irradiation.
Post-irradiation, mice may present with lethargy, decreased mobility, and ataxia, which typically disappear 3 days after the irradiation (step 4). However, excessive irradiation exposure, an excessive tumor burden before the irradiation, or a lack of post-irradiation care may increase the mortality rate of irradiated mice.
Potential solution
Always irradiate Ptch1-deficient MB-bearing mice with a dosage of less than 1 Gy. Extensive cell apoptosis caused by irradiation in tumor tissues from Ptch1-deficient mice over 3-6 weeks of age leads to a significant increase of intracranial pressure after the irradiation, which often causes the death of irradiated mice. It is recommended to use Ptch1-deficient mice at 3 weeks of age. If needed, 20% Mannitol (1 g/kg of body weight) by intravenous (IV) injection 8 may help to reduce the intracranial pressure in mice after the irradiation. A nutritionally fortified water gel can be put into the cage to aid the recovery of irradiated mice.

Problem 4
Excessive background signal in irradiated tumor tissues after immunofluorescence. Irradiation causes extensive cell death/apoptosis in tumor tissue, which may cause excessive nonspecific antibody staining.
Potential solution
Extend the blocking period to 2 h with 10% NGS before incubation of tumor tissues with the primary and secondary antibodies. If nonspecific staining persists, add 10% BSA to the 10% NGS to block the tumor tissue before application of the antibodies.

RESOURCE AVAILABILITY

Lead contact
Further information and requests for resources and reagents should be directed to, and will be fulfilled by, the lead contact, Yijun Yang (Yijun.Yang@fccc.edu) or Zeng-Jie Yang (Zengjie.Yang@fccc.edu).

Materials availability
No newly generated materials are associated with this protocol.

Data and code availability
No datasets were generated for analysis in this protocol. No unique code was generated for this study.
VCAM-1 Targeted Lipopolyplexes as Vehicles for Efficient Delivery of shRNA-Runx2 to Osteoblast-Differentiated Valvular Interstitial Cells; Implications in Calcific Valve Disease Treatment

Calcific aortic valve disease (CAVD) is a progressive inflammatory disorder characterized by extracellular matrix remodeling and valvular interstitial cell (VIC) osteodifferentiation, leading to calcification of the valve leaflets and impaired movement. Runx2, the master transcription factor involved in VIC osteodifferentiation, modulates the expression of other osteogenic molecules. Previously, we have demonstrated that the osteoblastic phenotypic shift of cultured VIC is impeded by Runx2 silencing using fullerene (C60)-polyethyleneimine (PEI)/short hairpin (sh)RNA-Runx2 (shRunx2) polyplexes. Since the use of polyplexes for in vivo delivery is limited by their instability in the plasma and their non-specific tissue interactions, we designed and obtained targeted, lipid-enveloped polyplexes (lipopolyplexes) suitable for (1) systemic administration and (2) targeted delivery of shRunx2 to osteoblast-differentiated VIC (oVIC). Vascular cell adhesion molecule (VCAM)-1, expressed on the surface of oVIC, was used as a target, and a peptide with high affinity for VCAM-1 was coupled to the surface of lipopolyplexes encapsulating C60-PEI/shRunx2 (V-LPP/shRunx2). We report here that V-LPP/shRunx2 lipopolyplexes are cyto- and hemo-compatible and are specifically taken up by oVIC. These lipopolyplexes are functional, as they downregulate Runx2 gene and protein expression, and their uptake leads to a significant decrease in the expression of osteogenic molecules (OSP, BSP, BMP-2). These results identify V-LPP/shRunx2 as a new, appropriately directed vehicle that could be instrumental in developing novel strategies for blocking the progression of CAVD using a targeted nanomedicine approach.

Introduction

Calcific aortic valve disease (CAVD) is the most common heart valve disorder, with increased prevalence in people over 65 years [1,2]. Inflammation plays a crucial role in the onset and progression of CAVD, which starts with valvular endothelial cell (VEC) inflammation and dysfunction that contribute to immune cell recruitment into the subendothelial space and the creation of a microenvironment that favors the activation and osteodifferentiation of valvular interstitial cells (VIC). The latter actively contribute to aortic valve fibrosis and calcification. Calcification of the aortic valve impairs valvular motion and impedes blood flow towards the aorta, leading to cardiac hypertrophy and, ultimately, heart failure [2,3]. There are no efficient pharmacological therapies to prevent or reverse CAVD [4], the only effective interventions being surgical aortic valve replacement or the minimally invasive transcatheter aortic valve implantation (TAVI).

Lately, RNA interference (RNAi) has emerged as a powerful gene silencing strategy for therapeutic purposes. The first small interfering (si)RNA-based therapy was approved by the Food and Drug Administration (FDA) in 2018 for the treatment of polyneuropathy in people with hereditary transthyretin (TTR)-mediated amyloidosis. It is commercialized under the name Onpattro® (patisiran) and consists of a liposome formulation of siRNA designed to target a sequence of TTR mRNA and deliver it to the liver (https://www.onpattro.com/, accessed on 25 March 2022).
The liposomes comprise an ionizable cationic lipid (DLin-MC3-DMA, (6Z,9Z,28Z,31Z)-heptatriaconta-6,9,28,31-tetraen-19-yl 4-(dimethylamino)butanoate), a phospholipid (DSPC, 1,2-distearoyl-sn-glycero-3-phosphocholine), cholesterol, and a polyethylene glycol-modified lipid (PEG2000-C-DMG, 1,2-dimyristoyl-rac-glycero-3-methoxypolyethylene glycol-2000). Once injected into the blood, the lipid nanoparticles are opsonized by apolipoprotein E (ApoE), bind to ApoE receptors on hepatocytes, and are internalized by endocytosis. Following endocytosis, ionization of the lipid component takes place, favoring fusion between the liposome and endosomal membranes and the release of the entrapped siRNA into the cytoplasm. The endogenous RNAi machinery of the cells processes the siRNA before it binds to the target messenger RNA and degrades it [5].

For cardiovascular disease (CVD) treatment, encouraging results using RNAi have been obtained in animal models, highlighting its high potential as a novel therapy for CVD [6]. Currently, one medication, namely Leqvio® (inclisiran), has been approved by the European Medicines Agency (EMA) to lower blood cholesterol levels in people with hypercholesterolemia or mixed dyslipidemia [7]. Inclisiran is a synthetic small interfering (si)RNA directed against proprotein convertase subtilisin/kexin type 9 (PCSK9) mRNA, conjugated to triantennary N-acetylgalactosamine carbohydrates to ensure specific binding to asialoglycoprotein receptors expressed on hepatocytes.

In this study, we aimed to develop a nanocarrier suitable for in vivo administration and able to perform targeted delivery of shRNA-Runx2 (shRunx2) to affected VIC. When administered in vivo, polyplexes, owing to their overall positive charge, readily interact with plasma proteins and accumulate within a few minutes in reticuloendothelial organs such as the liver or spleen [10]. Thus, strategies to shield polyplexes against non-specific interactions once injected in vivo, and to direct them to specific cells or tissues, are required. It has been reported that lipopolyplexes, namely lipid-coated polyplexes, exhibit superior colloidal stability, reduced cytotoxicity, and higher gene transfection efficiency compared to polyplexes [11-13]. Therefore, we envisioned the development of liposome-encapsulated preformed C60-PEI/shRunx2 polyplexes (lipopolyplexes) that can be functionalized with a suitable ligand to allow specific cellular delivery and increased transfection efficiency.

The vascular cell adhesion molecule (VCAM)-1, a transmembrane sialoglycoprotein, has been used intensively for targeted drug delivery to the endothelium owing to its inducible expression on the cell surface in pathological conditions [14-16]. Moreover, there is an increased expression of VCAM-1 in aortic VIC after exposure to IFN-γ and LPS [4] or to high molecular weight Poly(I:C), a dsRNA mimic [17]. In addition, VCAM-1 is highly expressed in the aortic valve of diabetic/atherosclerotic ApoE-deficient mice and is thus an appropriate target for nanocarriers developed to block the progression of CAVD [18]. We report herein the design, preparation, and characterization of lipopolyplexes (LPP) functionalized with a peptide recognizing VCAM-1 and encapsulating C60-PEI/shRunx2 polyplexes (V-LPP/shRunx2), and the validation of their functionality in reducing the osteogenic differentiation of aortic valve interstitial cells.
Design and Characterization of VCAM-1 Targeted Lipopolyplexes

The VCAM-1 targeted lipopolyplexes, consisting of PEG-stabilized liposomes containing the C60-PEI/shRNA polyplexes inside and coupled with a VCAM-1 recognizing peptide, were obtained by the procedure schematically presented in Figure 1.

Figure 1. Schematic representation of the successive steps in the synthesis of VCAM-1 targeted lipopolyplexes encapsulating the shRNA plasmid (V-LPP/shRNA).

First, the core-shell structures consisting of a fullerene (C60) core and branched low molecular weight polyethyleneimine (PEI) (2 kDa) were complexed with the plasmid containing shRNA sequences specific for Runx2 (shRunx2) or scrambled sequences (shCTR). Second, the anionic phospholipid DOPG dissolved in chloroform/methanol was added to the positively charged C60-PEI/shRNA polyplexes to form reverse micelles entrapping the polyplexes, and third, the organic phase was removed by reverse-phase evaporation under reduced pressure in the presence of the coating phospholipids (POPC, Methoxy-PEG2000-DSPE, and Mal-PEG2000-DSPE). The resulting lipopolyplexes, namely PEG-stabilized lipid-coated particles containing the C60-PEI/shRNA polyplexes inside, were subsequently extruded through polycarbonate membranes to achieve a narrow and unimodal size distribution of the lipopolyplex suspension. Next, the VCAM-1 recognizing peptide, having an amino acid sequence terminating in cysteine, was coupled via a reaction between its thiol and the maleimide-derivatized PEGylated phospholipid (Mal-PEG2000-DSPE) in the liposome membranes.

The HPLC data indicated an amount of 6.5 µg peptide per µmol lipid coupled to the surface of the liposome-encapsulated polyplexes. Quantification of the fluorescence of the shRNA plasmid encapsulated into LPP using the Quant-iT PicoGreen reagent revealed ~9.8 µg shRNA plasmid DNA entrapped per µmol lipid in the lipid-coated particles (98% encapsulation efficiency); the arithmetic behind this figure is sketched below. DLS measurements indicated an average hydrodynamic diameter of ~190 nm for V-LPP/shCTR (Table 1, Figure 2C), a figure that is in good agreement with TEM observations. The lipopolyplex population was homogeneous, as reflected by the polydispersity index (PDI) of ~0.2 [19]. The decreased hydrodynamic diameter of the LPP, compared to the naked C60-PEI/shCTR polyplexes (~270 nm) (Table 1, Figure 2A), indicates the organization of the lipid bilayer structure around the C60-PEI/shCTR core, leading to more compact nanoparticles. Zeta potential measurements showed that the positive charge of the C60-PEI/shCTR polyplexes at an N/P ratio of 25 (+15 mV) was completely enveloped by the lipid coat, resulting in stable, negatively charged nanoparticles (~−30 mV) (Table 1, Figure 2B,D). To examine the stability of the lipopolyplexes over time, we measured their size at intervals up to 4 weeks of storage at 4 °C. The data showed no significant changes in the size and ζ-potential of the V-LPP/shCTR lipopolyplexes, which were relatively stable over one month, in contrast to the C60-PEI/shCTR polyplexes, which showed a gradual increase in their dimensions with time (Table 1). Negative-staining transmission electron microscopy (TEM) revealed that V-LPP/shCTR lipopolyplexes appear as round structures surrounded by a lipid coat, with a uniform size distribution and ~200 nm diameter (Figure 3A).
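The 98% encapsulation efficiency quoted above is consistent with ~9.8 µg of an initial ~10 µg plasmid per µmol lipid remaining entrapped; a minimal sketch of the arithmetic (the 10 µg input value is inferred, and the PicoGreen readout is assumed here to report the unencapsulated fraction):

```python
# Encapsulation efficiency from a PicoGreen-type assay. Assumes the
# reagent reports the unencapsulated (free) plasmid; the input amount
# of ~10 ug per umol lipid is inferred from the figures in the text.
def encapsulation_efficiency(total_ug, free_ug):
    """EE% = encapsulated plasmid / total plasmid x 100."""
    return (total_ug - free_ug) / total_ug * 100.0

total_plasmid_ug = 10.0   # plasmid added per umol lipid (inferred)
free_plasmid_ug = 0.2     # unencapsulated fraction (illustrative)
print(encapsulation_efficiency(total_plasmid_ug, free_plasmid_ug))  # 98.0
```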
The colloidal stability of the polyplexes and lipopolyplexes was assessed by measuring their size after incubation in PBS for different time intervals (Figure 3B). The mean hydrodynamic diameter of the C60-PEI/shCTR polyplexes increased by ~50% after 16 min of incubation in PBS. By comparison, the size of the lipid-coated polyplexes (V-LPP/shCTR) was relatively unchanged at the end of the incubation period. The electrolyte-induced flocculation study, performed by exposing the lipopolyplexes and polyplexes to different sodium chloride concentrations, showed that the lipopolyplexes maintained their particle size of ~200 nm at all NaCl concentrations investigated (up to 5%) (Figure 3C). By comparison, for the non-encapsulated polyplexes, a gradual increase in particle size was detected starting at 0.9% NaCl, where the size was double that measured in the absence of the electrolyte, reaching a value of 1 µm at concentrations above 4% NaCl.

VIC activation and osteodifferentiation were induced by exposing the cells to culture medium containing 25 mM glucose and osteogenic factors (HGOM), as previously reported [9]. VCAM-1 expression on the cell surface was assessed by cultivating VIC in normal medium (NM) or HGOM for 1, 2, and 7 days and evaluating surface VCAM-1 by flow cytometry. The results showed that VCAM-1 is constitutively expressed on the surface of a fraction of VIC: approximately 40% of VIC grown in NM expressed VCAM-1. However, VCAM-1 expression was significantly enhanced by exposure of VIC to HGOM, when ~80% of the cells were positive for VCAM-1 (Figure 4A).

Figure 4. (A) The bar graph shows data as mean ± S.D.; overlays of representative flow cytometry histograms are presented above the graphs. (B) Fluorescence microscopy exemplifying the uptake of the V-LPP/Cy3-labeled plasmid by VIC exposed to NM or HGOM for 5 days before incubation with lipopolyplexes for 24 and 48 h in the absence or presence of an excess of V-BP (V-LPP/Cy3, red; scale bar: 100 µm). (C) Quantification of lipopolyplex uptake expressed as the fluorescence intensity of red pixels for each image field (each point represents the mean of 6 fields), using ImageJ software. The bar graph shows results as mean ± S.D. (D) Flow cytometry data showing the uptake of the V-LPP/Cy3-labeled plasmid by VIC exposed to NM or HGOM for 5 days before incubation with lipopolyplexes for 48 h in the absence or presence of excess V-BP. The results are expressed as Mean Fluorescence Intensity (MFI) and plotted from a single experiment using triplicate probes. Representative flow cytometry charts are shown above the graph. Statistical significance: * p < 0.05, ** p < 0.01, *** p < 0.001.

VCAM-1 Targeted Lipopolyplexes Are Efficiently Taken up by Osteogenic-Differentiated VIC

To follow the internalization of the VCAM-1 targeted lipopolyplexes by VIC, the cells were grown in NM or HGOM (5 days) and incubated for 24 and 48 h with VCAM-1 targeted lipopolyplexes containing C60-PEI/Cy3-labeled plasmid polyplexes (V-LPP/Cy3), in the absence or presence of an excess concentration of the VCAM-1 binding peptide (V-BP). The cells were then processed for fluorescence microscopy. As shown in Figure 4B, V-LPP/Cy3 uptake is specific and mediated mainly by the VCAM-1 molecule, as attested by the reduced fluorescence when the uptake was performed in the presence of excess V-BP. Also, the uptake was increased in HGOM-activated VIC compared with V-LPP/Cy3 uptake by VIC exposed to NM at both investigated intervals.
The quantification of lipopolyplex uptake by VIC, expressed as the fluorescence intensity signal measured by red pixel fluorescence (ImageJ software, version 1.8.0), is shown in Figure 4C. The flow cytometry data obtained upon incubation of VIC with V-LPP/Cy3 (48 h) were in line with the fluorescence microscopy results, showing that the uptake of the V-LPP/Cy3 plasmid lipopolyplexes was higher for VIC exposed to HGOM compared with the uptake by VIC grown in NM (Figure 4D). The uptake is dependent on VCAM-1 expression, since the addition of excess V-BP competed with V-LPP uptake and impeded efficient internalization (Figure 4D).

V-LPP/shRunx2 Lipopolyplexes Downregulate Runx2 Expression in Osteoblast-Differentiated VIC

Since we found that the VCAM-1 targeted lipopolyplexes are specifically taken up by HGOM-activated VIC and deliver the plasmid cargo intracellularly, we investigated the downregulation of the Runx2 transcription factor in osteoblast-differentiated VIC by V-LPP/shRunx2. Previously, we have shown that the exposure of VIC to HGOM significantly increases the mRNA and protein levels of the transcription factor Runx2, a key player in the osteodifferentiation of VIC [9]. Indeed, as can be observed in Figure 5, the exposure of VIC to HGOM for 7 and 14 days determined a significantly increased expression of Runx2 at both the mRNA and protein levels; the increase was ~40% compared to its expression in VIC grown in NM (Figure 5A,C). Quantitative real-time PCR experiments revealed that transfection for 48 h of VIC previously exposed (5 days) to HGOM with V-LPP/shRunx2 determined a significant reduction (~35%) of Runx2 gene expression (Figure 5A); the fold changes were computed as sketched below. Transfection of VIC with the C60-PEI/shRunx2 polyplexes determined a downregulation of Runx2 gene expression by ~40% of the levels measured in HGOM-treated VIC, while the downregulation obtained using Scr-LPP/shRunx2 was ~30%. The use of the shCTR plasmid encapsulated into VCAM-1 targeted or non-targeted lipopolyplexes did not affect the level of Runx2 gene expression. Western blot assays revealed that at 48 h after transfection of five-day HGOM-activated VIC with V-LPP/shRunx2, the protein expression of Runx2 was reduced by ~40% (Figure 5B). When transfection was performed using Scr-LPP/shRunx2 and C60-PEI/shRunx2, the protein level of Runx2 was reduced by ~25% and ~35%, respectively. The use of the shCTR plasmid did not affect the level of Runx2 protein (Figure 5B). At 48 h after the second transfection of HGOM-exposed VIC with V-LPP/shRunx2 (day 14 in culture), Runx2 mRNA expression decreased significantly, by ~80% (Figure 5C). The double transfection with Scr-LPP/shRunx2 determined a reduction in Runx2 gene expression of ~30%. The use of C60-PEI/shRunx2 for the double transfection induced the same decrease of Runx2 gene expression at day 14 as at day 7 after one transfection, ~40%. No statistically significant inhibition was obtained when lipopolyplexes containing the control shCTR plasmid were employed.

Figure 5. Runx2 levels in HGOM-activated VIC subjected to double transfection on the 5th and 12th days, measured 48 h after the second transfection (14th day). The results were normalized to β-actin, represent the mean ± S.D. of two independent experiments made in duplicate (n = 4), and are expressed as fold change relative to the HGOM condition (considered as 1). The samples considered statistically different were marked with * p < 0.05, ** p < 0.01, *** p < 0.001 when compared to HGOM; # p < 0.05, ## p < 0.01, ### p < 0.001 when compared to NM; & p < 0.05 when compared to V-LPP/shRunx2.
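The fold changes above follow the qPCR normalization described in the caption (β-actin reference, HGOM calibrator set to 1); assuming the standard 2^−ΔΔCt method, the computation looks like this (Ct values are illustrative only):

```python
# Relative expression by the standard 2^-delta-delta-Ct method,
# matching the normalization in the caption (beta-actin reference,
# HGOM calibrator set to 1). Ct values below are illustrative only.
def fold_change(ct_gene, ct_ref, ct_gene_cal, ct_ref_cal):
    """2^-ddCt fold change relative to the calibrator condition."""
    delta_ct_sample = ct_gene - ct_ref          # dCt, treated sample
    delta_ct_cal = ct_gene_cal - ct_ref_cal     # dCt, HGOM calibrator
    return 2.0 ** -(delta_ct_sample - delta_ct_cal)

# Runx2 in V-LPP/shRunx2-treated vs untreated HGOM cells:
print(fold_change(ct_gene=25.3, ct_ref=17.0,
                  ct_gene_cal=24.6, ct_ref_cal=17.0))  # ~0.62 (~38% down)
```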
V-LPP/shRunx2 Lipopolyplexes Reduce the Expression of Osteoblast-Specific Differentiation Markers in Osteogenic-Differentiated VIC

To find out whether the downregulation of the transcription factor Runx2 has a functional effect, we determined the gene and protein expression of the osteogenic molecules OSP, BSP, and BMP-2 in VIC exposed to HGOM and transfected with V-LPP/shRunx2 lipopolyplexes. In control experiments, exposure of VIC (7 days) to HGOM led to a significant increase (~40%) in OSP, BSP, and BMP-2 gene expression (Figure 6A,C,E). Transfection of oVIC for 48 h with V-LPP/shRunx2 or Scr-LPP/shRunx2 lipopolyplexes, or with C60-PEI/shRunx2 polyplexes, led to a decrease in mRNA that was similar for OSP and BSP (~40%) and BMP-2 (~35%) (Figure 6A,C,E). No reduction in OSP, BSP, or BMP-2 gene expression was found in the cells transfected with the shCTR plasmid encapsulated either in VCAM-1 targeted (V-LPP/shCTR) or non-targeted (Scr-LPP/shCTR) lipopolyplexes. A higher percentage reduction in OSP protein expression was determined when osteogenic-differentiated VIC (oVIC) were treated for two days with V-LPP/shRunx2 (~55%), compared with treatment with Scr-LPP/shRunx2 or C60-PEI/shRunx2 polyplexes, where a decrease of ~30-35% was obtained relative to OSP protein expression in non-treated oVIC (Figure 6B). Also, transfection with lipopolyplexes carrying the shCTR plasmid, either VCAM-1 targeted or non-targeted, did not reduce OSP protein expression in oVIC. Although the exposure of VIC to HGOM for 7 days determined a significant increase in BSP and BMP-2 mRNA levels, we found no significant increases of these osteogenic molecules at the protein level (Figure 6D,F), a result in line with our previous data showing an increase in BSP expression only after 14 days of VIC exposure to HGOM [9]. The transfection of oVIC on the fifth day of exposure to HGOM with either lipopolyplexes or polyplexes did not influence the protein expression of BSP. Instead, a significant reduction in BMP-2 protein expression was determined by transfection of oVIC with V-LPP/shRunx2 and C60-PEI/shRunx2 (~50% in both cases). No effect on BMP-2 protein expression was detected upon oVIC transfection with Scr-LPP/shRunx2 or with lipopolyplexes encapsulating the shCTR plasmid.

The second transfection of oVIC with V-LPP/shRunx2, Scr-LPP/shRunx2, and C60-PEI/shRunx2 on the twelfth day of VIC exposure to HGOM maintained a decreased level of OSP mRNA expression (~30-35%) compared with non-treated oVIC (Figure 6G). The percentages of inhibition of the BSP mRNA level were ~50%, ~30%, and ~45%, whereas the BMP-2 mRNA levels were reduced by ~40%, ~35%, and ~65%, respectively, at 48 h after the second transfection of oVIC with V-LPP/shRunx2, Scr-LPP/shRunx2, and C60-PEI/shRunx2 (Figure 6H,I). By comparison, the second transfection using the control lipopolyplexes, V-LPP/shCTR and Scr-LPP/shCTR, had no effect on OSP, BSP, or BMP-2 gene expression, which remained at levels similar to those measured in untreated HGOM-exposed VIC at 14 days (Figure 6G-I).

Figure 6 (G-I). The gene expression of OSP, BSP, and BMP-2 in VIC exposed to HGOM for 14 days and subjected to a second transfection on the 12th day. As controls, transfections using V-LPP/shCTR and Scr-LPP/shCTR were employed.
Results, normalized to β-actin, were expressed as mean ± S.D. of two independent experiments made in duplicate (n = 4) and represented as fold change relative to the HGOM condition (considered as 1). The samples considered statistically different were marked with * p < 0.05, ** p < 0.01, *** p < 0.001 when compared to HGOM, and # p < 0.05, ## p < 0.01, ### p < 0.001 when compared to NM.

Lipopolyplexes Are Cyto- and Hemocompatible

The cytotoxicity of the lipopolyplexes was determined by the ToxiLight assay, measuring adenylate kinase (AK) release at 48 h after VIC were subjected to one or two transfections with lipopolyplexes and polyplexes on the fifth and twelfth days. The data were normalized and presented as fold change relative to cells exposed to HGOM medium for 7 and 14 days in the absence of transfection with lipopolyplexes, considered as 1; values greater than 1 signify that a compound is toxic to cells. The data indicated that AK release was not increased by VIC incubation with lipopolyplexes, after either a single or a double transfection. This result indicates the cytocompatibility of the developed lipopolyplexes and suggests that they can be safely used to deliver the shRNA plasmid cargo to the cells (Figure 7A).

Hemolysis and erythrocyte aggregation assays were used to evaluate the hemocompatibility of the lipopolyplexes by ex vivo incubation with erythrocytes. The hemolysis induced by V-LPP/shCTR lipopolyplexes at various lipid concentrations (14-140 nM, corresponding to plasmid concentrations of 4.5-45 µg/mL shRNA plasmid and 20-200 µg/mL C60-PEI) is presented in Figure 7B. All of the V-LPP/shCTR concentrations displayed a degree of hemolysis of less than 3%, even at the highest concentration (equivalent to a dose of 45 µg plasmid). No erythrocyte aggregation was detected, the samples behaving similarly to the negative control (Figure 7C).
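The hemolysis percentages above are typically computed from supernatant absorbance against PBS (0%) and water-lysed (100%) controls; a minimal sketch under that assumption, with illustrative absorbance values:

```python
# Percent hemolysis from supernatant absorbance, the usual readout
# for this assay (0% control: PBS; 100% control: water-lysed
# erythrocytes). Absorbance values below are illustrative.
def hemolysis_percent(a_sample, a_negative, a_positive):
    return (a_sample - a_negative) / (a_positive - a_negative) * 100.0

a_neg, a_pos = 0.05, 1.60              # PBS and full-lysis controls
for a_sample in (0.06, 0.08, 0.09):    # lipopolyplex-treated samples
    print(round(hemolysis_percent(a_sample, a_neg, a_pos), 2))
# prints 0.65, 1.94, 2.58 -- all below the 3% observed in the text
```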
Discussion

Aortic valve stenosis and calcification constitute an active cellular process occurring within the valve leaflet that involves the pathological differentiation of the valvular cells. No pharmacological treatment to specifically slow down the progression of aortic valve calcification is currently known. VIC, the most abundant cell type in the aortic valve, play an active role in valve calcification by acquiring an osteoblast-like phenotype in pathological conditions [20]. The Runx2 transcription factor is a key regulator of the VIC transition towards an osteoblastic phenotype in response to various conditions such as diabetes, hyperlipidemia, or advanced age [21]. The exposure of cultured VIC to osteogenic factors, in particular dexamethasone, β-glycerophosphate, and ascorbic acid, increases the expression of Runx2 and promotes the osteodifferentiation of VIC [22]. Previously, we demonstrated that the association of osteogenic factors with high glucose concentrations has a synergistic effect on Runx2 expression, accelerating the osteodifferentiation of VIC [9]. Since the shift of VIC from a fibroblast-like to an osteoblast-like phenotype (oVIC) is a critical step in the development and progression of aortic valve calcification, these cells represent a promising target for pharmacological intervention. We have previously demonstrated that Runx2 silencing in oVIC, using C60-PEI/shRunx2 polyplexes, mitigates the osteodifferentiation of VIC [9]. Here, our aim was to develop a targeted delivery system suitable for providing shRNA sequences specific for Runx2 downregulation to the aortic valve cells once injected in vivo.

We report now the design, preparation, and characterization of functional VCAM-1 targeted PEGylated lipopolyplexes encapsulating C60-PEI/shRunx2 polyplexes (V-LPP/shRunx2). VCAM-1 was chosen as an appropriate target on the aortic VIC surface based on results demonstrating increased VCAM-1 expression in cultured VIC activated with different stimuli, as well as in the aortic valve of diabetic ApoE-deficient mice [17,18,23]. In this study, we demonstrated the presence of VCAM-1 on the surface of VIC isolated from non-calcified leaflets of the human aortic valve grown in NM. Exposure of the cells to HGOM led to significantly increased VCAM-1 expression. The lipopolyplexes are composed of a "core" of polyplexes, made from C60-PEI nanoconjugates complexed with the shRNA plasmid, surrounded by a "shell" consisting of a lipid bilayer, as revealed by the TEM images. Small-angle X-ray scattering (SAXS) would help to obtain a complete characterization of the structural organization of the nanoparticles [24]. The V-LPP/shRunx2 were physicochemically characterized, and the results indicated a stable aqueous dispersion of particles with an average dimension of ~200 nm and a negative zeta potential (~−30 mV) due to the anionic lipid bilayer shell. No significant changes in the size and zeta potential of the V-LPP/shRNA lipopolyplexes suspended in PBS were observed, whereas the C60-PEI/shRNA polyplexes rapidly aggregated within 15 min. These results point to the colloidal stability of the lipopolyplexes and are in line with previous studies [11-13]. When systemically injected, the interaction of non-PEGylated nanoparticles with electrolytes causes their aggregation [25]. Indeed, the C60-PEI/shRNA polyplexes are prone to aggregation, as suggested by a 2-fold increase in size when incubated in 0.9% NaCl, the concentration found in the bloodstream. By contrast, the PEGylated V-LPP/shRNA lipopolyplexes were not affected by the interaction with electrolytes, and no modification of their sizes occurred when exposed to NaCl concentrations equal to or higher than 0.9%. Thus, we can safely conclude that the colloidal and electrolyte stability of the lipopolyplexes makes them suitable for systemic in vivo administration.

We have shown a specific intracellular uptake of the VCAM-1 targeted lipopolyplexes by VIC grown in either NM or HGOM, mediated by the cell adhesion molecule VCAM-1, as demonstrated by the experiments performed in the presence of excess V-BP. The increased level of VCAM-1 expression on the surface of HGOM-exposed VIC led to a higher internalization of the fluorescently labeled plasmid cargo encapsulated into VCAM-1 targeted lipopolyplexes compared to their internalization by VIC grown in NM. The data support and extend previous studies showing VCAM-1 to be a suitable target for the specific binding and internalization of nanoparticles by endothelial cells [26-29], and emphasize that specific nanoparticle uptake generalizes to other cell types expressing VCAM-1. We then investigated the direct therapeutic effect of V-LPP/shRunx2 on oVIC and found that both Runx2 mRNA and protein levels in HGOM-exposed VIC were significantly decreased at 48 h after transfection with V-LPP/shRunx2. Two transfections were performed, on the 5th and 12th days of VIC exposure to HGOM, and after the second transfection a higher reduction of the Runx2 mRNA level was achieved when V-LPP/shRunx2 was used compared to the other formulations of the shRNA-Runx2 plasmid.
Yet, after the first transfection, both non-targeted formulations of the shRunx2 plasmid (Scr-LPP/shRunx2 and C60-PEI/shRunx2) reduced Runx2 expression. This could indicate a non-specific cellular internalization due to the long incubation time of oVIC with the nanoparticles [30]. It may be that the non-specific endocytosis of Scr-LPP/shRunx2 lipopolyplexes and C60-PEI/shRunx2 polyplexes provides the amount of plasmid necessary to reduce Runx2 expression in oVIC. However, after the second transfection, a statistically significant difference in the reduction of Runx2 mRNA expression between the targeted (V-LPP/shRunx2) and non-targeted (Scr-LPP/shRunx2 and C60-PEI/shRunx2) formulations was determined. Also, the Runx2 mRNA level was lower than that measured at seven days, after the first oVIC transfection with V-LPP/shRunx2, whereas after cell transfection with Scr-LPP/shRunx2 and C60-PEI/shRunx2, the level of Runx2 mRNA remained constant. A possible explanation for the reduced Runx2 mRNA expression obtained with targeted lipopolyplexes after the second transfection may be their higher cellular internalization, mainly by receptor-mediated endocytosis through clathrin-coated pits, as reported [26], and also by non-specific endocytosis, compared to the non-specific uptake of Scr-LPP/shRunx2 and C60-PEI/shRunx2, which may become saturated. Another explanation may be the different intracellular processing of VCAM-1 targeted lipopolyplexes and of non-targeted lipopolyplexes and polyplexes after internalization, which affects the shRNA plasmid functionality. However, it should be noted that both Scr-LPP/shRunx2 and C60-PEI/shRunx2 produce about the same level of Runx2 downregulation despite their different nature. Non-targeted lipopolyplexes are PEGylated and negatively charged. Instead, C60-PEI/shRunx2 polyplexes are non-PEGylated and have a positive Zeta potential. Therefore, it is expected that different internalization mechanisms are involved in each case. Nonetheless, in our experiments, irrespective of the mechanism involved in the internalization of the nanoparticles, the same effect translated into RNAi was obtained. Further investigation using inhibitors of endocytic pathways is needed to elucidate the internalization pathways involved in the process. Importantly, the downregulation of Runx2 using VCAM-1 targeted lipopolyplexes for RNAi causes a consequent reduction in the gene and protein expression of the osteogenic molecules OSP, BSP, and BMP-2. The data point to the functional role of V-LPP/shRunx2 in blocking the pathological process of human aortic VIC osteodifferentiation by Runx2 silencing. Compared to the C60-PEI/shRunx2 polyplexes, which also produce downregulation of Runx2 and osteogenic molecules in cultured oVIC [9], there are two significant advantages of the newly developed VCAM-1 targeted lipopolyplexes: their suitability for in vivo administration and their potential to perform targeted delivery of the shRunx2 plasmid to the diseased aortic valve. The cytotoxicity assay, based on the amount of released adenylate kinase (AK), indicated that the treatment with lipopolyplexes did not cause toxicity in VIC after either a single or a double transfection. This result indicates that the lipopolyplexes are biocompatible materials according to the International Organization for Standardization standard ISO 10993-5:2009, "Biological Evaluation of Medical Devices Part 5: Tests for in Vitro Cytotoxicity, 2009".
Also, the hemocompatibility tests showed a comparable percentage of lysed erythrocytes after incubation with V-LPP/shRunx2 or PBS, which was less than the 5% threshold considered the safe hemolytic ratio for biomaterials (International Organization for Standardization (ISO) 10993-4:2017). In addition, examination of the erythrocytes after incubation with lipopolyplexes revealed a morphology similar to the negative control (PBS) at all investigated concentrations. These results illustrate the cyto- and hemocompatibility of the developed lipopolyplexes and recommend them as safe vehicles for delivery of the cargo (shRNA plasmid) to the cells. To preclinically validate the therapeutic effect of this RNAi vector, the results of this study justify further testing, in appropriate animal models, of this targeted nano-delivery system designed to recognize a molecular target expressed by the diseased aortic valve. We intend to follow the localization of VCAM-1 targeted lipopolyplexes in the aortic valve and the therapeutic effect of V-LPP/shRunx2 on diabetes-induced changes in aortic heart valves in a murine model of atherosclerosis developed previously [18].
Reagents
The commercial sources of the main reagents and consumables used in this study were as follows:
Human VIC Isolation and Culture
Primary human VIC were harvested from non-calcified cusps (or portions of the cusp) of the aortic valve obtained from a patient who underwent surgical valve replacement, as previously described [31]. The surgery was performed at the Central Military Hospital "Dr. Carol Davila", Cardiovascular Surgery Clinic, Bucharest, according to the Declaration of Helsinki for experiments involving human samples [32]. The patient signed the informed consent forms, and his anonymity and privacy rights were respected. VIC were cultured on 1% gelatin-coated plates and grown in DMEM with 5.5 mM glucose, supplemented with 10% fetal bovine serum, 50 µg/mL neomycin, 100 IU/mL penicillin, and 100 µg/mL streptomycin (normal medium, NM), in a humidified 5% CO2 incubator at 37 °C. To induce VIC activation and osteodifferentiation, the cells were exposed to a medium containing 25 mM glucose (HG) and osteogenic factors (50 µg/mL ascorbic acid, 10 mM β-glycerophosphate, 10 nM dexamethasone) (high glucose osteogenic medium, HGOM), as previously reported [9]. The Ethics Committee of the Institute of Cellular Biology and Pathology "Nicolae Simionescu" approved the study.
VCAM-1 Expression in VIC
VIC were seeded in 24-well plates at a density of 50,000 cells/well. After 24 h, VIC were incubated in NM (used as control) or HGOM for 1, 2, or 7 days. Then, the cells were processed for flow cytometry analysis using a previously described protocol [33]. The quiescent and HGOM-activated VIC were detached from the culture plates, centrifuged, and resuspended in FACS buffer (0.5% PFA in PBS), followed by incubation with the primary anti-VCAM-1 antibody (1:25) for 1 h on ice. After washing in FACS buffer, the cells were incubated with the secondary allophycocyanin (APC)-conjugated goat anti-mouse IgG antibody (1:250) for 1 h on ice. Next, the cells were analyzed with a flow cytometer in the FL6 channel (660 nm) after excitation with the red laser (633 nm) (Gallios, Beckman Coulter, Brea, CA, USA). Data were analyzed with Kaluza Flow analysis software (v.2.1) (Beckman Coulter, Brea, CA, USA).
Preparation of Lipid-Enveloped C60-PEI/shRNA Polyplexes (Lipopolyplexes)
The lipopolyplexes (LPP) were prepared employing the reverse-phase evaporation method, as previously described [34]. First, the C60-PEI/shRNA polyplexes were obtained as described in our previous paper [9] by mixing C60-PEI and the shRNA plasmids at an N/P ratio of 25. Each constituent was diluted separately to achieve the appropriate concentration in the same volume of 2× HB buffer (20 mM HEPES, 10% D-glucose, pH = 7.4) and brought to 1000 µL with HB buffer (10 mM HEPES, 5% D-glucose, pH = 7.4). The MISSION® shRNA Plasmid DNA targeting the human Runx2 gene (shRunx2) (Sigma-Aldrich cat. no. SHCLND-NM_004348, clone TRCN0000013653), validated by us for silencing human Runx2 [9], and the MISSION® pLKO.1-puro non-Mammalian shRNA Control Plasmid DNA (shCTR), as a control plasmid, were used to obtain polyplexes at N/P = 25. The plasmids were amplified in the Escherichia coli host strain DH5α and isolated using the GenElute Plasmid Midiprep kit (Sigma-Aldrich, Germany). The N/P ratio is the ratio of nitrogen atoms in C60-PEI to phosphorus atoms in DNA and was calculated using the nitrogen percentage resulting from XPS elemental analysis of C60-PEI (16.6% N) [9,35] (a worked sketch of this calculation is given below). Then, 3 mM anionic DOPG diluted in 4.5 mL chloroform/methanol (2:1, v/v) was added to the preformed cationic C60-PEI/shRNA polyplexes. After incubation for 30 min at room temperature (RT), 1.5 mL of chloroform and 1.5 mL of distilled water were added. After phase separation by centrifugation at 830× g for 7 min, the aqueous phase was removed, and 6.8 mM POPC, 0.1 mM Mal-PEG-DSPE, and 0.1 mM PEG-DSPE (dissolved in chloroform), together with 1.5 mL of HB buffer, were added to the organic phase, containing inverted micelles encapsulating polyplexes, in a round-bottom glass bottle, vortexed vigorously, and sonicated for 1 min. The chloroform was removed by evaporation under vacuum on a rotary evaporator (Laborota 4000, Heidolph, Schwabach, Germany) at 37 °C, and lipid-coated polyplexes (lipopolyplexes) were obtained. The aqueous dispersion was extruded several times through 200 nm and 100 nm polycarbonate membranes using a hand extruder from Avanti Polar Lipids (Alabaster, AL, USA) to obtain lipopolyplexes, uniform in size, containing the shRNA-Runx2 plasmid (LPP/shRunx2) or the shCTR plasmid (LPP/shCTR).
Coupling of VCAM-1 Binding Peptide to Lipopolyplexes
The cysteine-bearing VCAM-1 binding peptide (NH2-VHPKQHRGGSKGC-COOH) or a scrambled peptide (NH2-HVKHRQPGGSKGC-COOH) was coupled to lipopolyplexes, resulting in VCAM-1 targeted (V-LPP) and non-targeted LPP (Scr-LPP), respectively. First, the peptides were incubated with a reducing agent (TCEP) for 2 h at room temperature to break the disulfide bonds. The excess TCEP was removed by dialysis overnight at 4 °C against coupling buffer (10 mM Na2HPO4, 10 mM NaH2PO4, 2 mM EDTA, 30 mM NaCl, pH = 6.7) using a dialysis membrane of 500-1000 Da. The peptides were added to the LPP and mixed overnight at 4 °C to form the bonds between the cysteine at the carboxy-terminal end of the peptides and the maleimide in the lipid bilayer. To saturate the uncoupled maleimide groups, the LPP were mixed with 1 mM L-cysteine for 30 min at room temperature. Next, the LPP were centrifuged using Amicon centrifugal filter units of 100 kDa in order to separate the free peptide from the peptide-coupled LPP.
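The N/P = 25 bookkeeping mentioned above reduces to simple stoichiometry. The following is a minimal sketch, not the authors' exact calculation: it assumes an average mass of ~330 g/mol per nucleotide (one phosphorus atom each), which is a common approximation, and uses the 16.6 wt% nitrogen figure quoted in the text; the exact mass ratio depends on the nucleotide mass assumed.

```python
# Sketch: mass of C60-PEI needed for a target N/P ratio.
# Assumptions (not from the paper, except the 16.6 wt% N figure):
#   - average nucleotide mass ~330 g/mol, one phosphorus atom each
#   - all nitrogen atoms in C60-PEI count toward N/P
M_N = 14.007           # g/mol, nitrogen
M_NUCLEOTIDE = 330.0   # g/mol per nucleotide (approximate)
N_WT_FRACTION = 0.166  # nitrogen weight fraction of C60-PEI (XPS, from the paper)

def polymer_mass_for_np(dna_ug: float, np_ratio: float = 25.0) -> float:
    """Return µg of C60-PEI needed to complex `dna_ug` µg of plasmid at `np_ratio`."""
    mol_p = dna_ug / M_NUCLEOTIDE        # µmol of phosphorus in the DNA
    mol_n = np_ratio * mol_p             # µmol of nitrogen required
    return mol_n * M_N / N_WT_FRACTION   # µg of polymer carrying that much nitrogen

if __name__ == "__main__":
    print(f"{polymer_mass_for_np(1.0):.1f} µg C60-PEI per µg plasmid at N/P = 25")
```

Under these assumptions the sketch gives roughly 6 µg of polymer per µg of plasmid at N/P = 25; the authors' reported concentrations imply a somewhat different factor, so this should be read as an illustration of the calculation, not a reproduction of it.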
The amount of VCAM-1 recognizing peptide coupled to the surface of the LPP was quantified indirectly by ultrahigh performance liquid chromatography (UHPLC), measuring the amount of peptide that remained uncoupled, as described previously [36]. The schematic representation of the preparation procedure of VCAM-1 targeted lipopolyplexes is shown in Figure 1.
Hydrodynamic Diameter and Zeta Potential
The size and Zeta (ζ)-potential of the LPP were determined, after a 1:1000 dilution in distilled water, by dynamic light scattering (DLS) and electrophoretic light scattering (ELS), respectively, on a Zetasizer Nano ZS (ZEN 3600, Malvern Instruments, Malvern, UK). For size, an average of 13 measurements per sample was used for an individual record, and for each sample, three records were acquired. The Zeta potential was determined using a Zeta dip cell (ZEN 1002) immersed into the sample, by running three consecutive records, each the average result of 13 measurements, at 5 V with a 300 s delay between measurements. The results were analyzed with the built-in Zetasizer Software version 7.12 (Malvern Instruments, Malvern, UK).
Negative Staining Transmission Electron Microscopy (TEM)
Five µL of the V-LPP/shRunx2 (diluted 1:10 with water) were deposited on 200 mesh formvar-coated 3 mm copper grids (cat. no. 2620C, SPI Supply, West Chester, PA, USA), previously treated with 0.01% poly-L-lysine for 2 min. Next, the excess liquid was blotted away, and the samples were negatively stained by adding 10 µL of 1% uranyl acetate for 2 min. After the incubation period, the excess was removed by blotting and the samples were allowed to dry. The grids were analyzed with a Tecnai G2 Spirit BioTWIN Transmission Electron Microscope equipped with a LaB6 filament (FEI Company, Thermo Fisher Scientific, Waltham, MA, USA). The TEM is equipped with a high-resolution FEI™ Eagle 2k CCD camera (FEI Company, Thermo Fisher Scientific) and was operated at an accelerating voltage of 100 kV with a beam current of 3 µA.
Physical and Colloidal Stability of Lipopolyplexes
The physical stability of the LPP was determined by measuring their dimensions and ζ-potentials at fixed time intervals (1, 2, 3, and 4 weeks) and comparing them to the initially measured values. The colloidal stability of the LPP in phosphate buffered saline (PBS) was assessed by the DLS method over 48 min, each record being made at an interval of 8 min. Moreover, to study the effect of electrolytes on LPP stability, the LPP/shCTR lipopolyplexes were incubated with different concentrations of sodium chloride (0.9-5% w/v) for 1 h at 37 °C, followed by particle size determinations. For comparison, the colloidal stability and electrolyte-induced flocculation of C60-PEI/shCTR polyplexes, as the core of the LPP, were determined.
Encapsulation Efficiency
The content of the encapsulated shRNA plasmids in the formed LPP was determined using the Quant-iT™ PicoGreen® dsDNA kit (Thermo Fisher Scientific cat. no. R11490). The fluorescent reagent stains nucleic acids for their quantitation in solution. Briefly, 40 µL of LPP were lysed in water containing 10% Triton X-100 and 35 IU heparin for 40 min at 37 °C. Separately, a standard curve of known shRNA plasmid concentrations was prepared in TE buffer (10 mM Tris-HCl, 1 mM EDTA, pH = 7.5), according to the manufacturer's instructions. The samples were incubated with 100 µL of Quant-iT PicoGreen reagent for 5 min at room temperature, protected from light.
The sample fluorescence was measured using a TECAN Infinite M200Pro (Tecan Group Ltd., Männedorf, Switzerland) at an excitation of 480 nm and an emission of 520 nm. The encapsulation efficiency was calculated as the ratio between the amount of shRNA plasmid encapsulated into the LPP and the total amount of shRNA plasmid initially added, according to Equation (1). Also, the loading of shRNA plasmid into the LPP was expressed as µg shRNA plasmid/µmol lipid.
Determination of the Uptake of VCAM-1 Targeted Lipopolyplexes by VIC
VIC were seeded in normal culture medium (NM) in 48-well plates at a density of 7,000 cells/well. After 24 h, the cells were incubated in HGOM medium for 5 days. Then, the cells were incubated for 24 and 48 h with VCAM-1 targeted lipopolyplexes encapsulating polyplexes formed between C60-PEI and a Cy3-labelled plasmid at N/P = 25 (V-LPP/Cy3), at a concentration of 0.3 µg DNA plasmid/well. To investigate the specificity of the binding and uptake of VCAM-1 targeted lipopolyplexes by HGOM-exposed VIC, competition studies in the presence of excess VCAM-1 binding peptide (V-BP) were performed. VIC were preincubated for 10 min with a 25-fold higher concentration of V-BP, compared with the peptide coupled to the surface of the lipopolyplexes, before incubation with V-LPP/Cy3. After washing with PBS, VIC were examined by fluorescence microscopy (Olympus IX81 microscope equipped with a tetramethylrhodamine (TRITC) filter). Also, the cells were detached from the dishes and analyzed by flow cytometry (Gallios Flow Cytometer, Beckman Coulter, Brea, CA, USA), using blue laser excitation at 488 nm and emission at 585/42 nm in the FL2-H channel. Data were processed using Kaluza Flow analysis software (v.2.1).
VIC Transfection with V-LPP/shRunx2 Lipopolyplexes
VIC were seeded in a 24-well plate at a density of 35,000 cells/well and after 24 h were exposed to NM or HGOM medium. On the 5th and the 12th day after exposure to HGOM, VIC were subjected to transfections with VCAM-1 targeted or non-targeted lipopolyplexes encapsulating the shRNA plasmid with specificity for Runx2 (V-LPP/shRunx2 and Scr-LPP/shRunx2, respectively) and with C60-PEI/shRunx2 polyplexes. As negative controls for RNA interference, the lipopolyplexes V-LPP/shCTR and Scr-LPP/shCTR, formed with the MISSION® pLKO.1-puro non-mammalian shRNA control plasmid DNA (shCTR) with no homology to known mammalian genes, were used. A concentration of 1 µg shRunx2 or shCTR plasmid DNA/well was employed. At 48 h after each transfection, namely on the 7th and the 14th days, the cells were processed for real-time quantitative reverse transcription-polymerase chain reaction (qRT-PCR) analysis and Western blot assay.
Quantitative RT-PCR
Total RNA was isolated at 48 h after incubation of VIC with lipopolyplexes or polyplexes using TRIzol™ reagent. The RNA concentration was determined with a NanoDrop 1000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). One µg of total RNA was used for the synthesis of cDNA using Moloney Murine Leukemia Virus (M-MLV) reverse transcriptase according to the producer's protocol (Invitrogen, Thermo Fisher Scientific, Waltham, MA, USA). The amplification of cDNA was performed for 42 cycles under the following optimized conditions: 2.5 mM MgCl2, annealing at 60 °C, and extension at 72 °C, using a LightCycler 480 Real-Time PCR System from Roche (Basel, Switzerland).
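The Ct values from such runs are converted to fold changes with the 2^(-ΔΔCt) method described next. As orientation, here is a minimal sketch of that calculation; the Ct values are hypothetical placeholders, not data from the paper:

```python
# Minimal sketch of the 2^(-ΔΔCt) relative-quantification method.
# The Ct values below are hypothetical placeholders, not data from the paper.

def fold_change(ct_gene, ct_actb, ct_gene_ref, ct_actb_ref):
    """Fold change of a gene vs. the reference (HGOM) condition, normalized to ACTB."""
    delta_ct = ct_gene - ct_actb              # ΔCt, sample condition
    delta_ct_ref = ct_gene_ref - ct_actb_ref  # ΔCt, reference condition
    return 2.0 ** -(delta_ct - delta_ct_ref)  # 2^(-ΔΔCt)

# Hypothetical example: Runx2 after V-LPP/shRunx2 vs. untreated HGOM cells.
fc = fold_change(ct_gene=26.5, ct_actb=17.0,           # treated sample
                 ct_gene_ref=24.8, ct_actb_ref=17.1)   # HGOM reference
print(round(fc, 2))  # ≈ 0.29, i.e. roughly a 3.5-fold reduction of Runx2 mRNA
```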
The Runx2, OSP, BSP, and BMP-2 expression levels were normalized to the ACTB (β-actin) expression, and fold changes relative to the HGOM condition were calculated using the 2^(-ΔΔCt) method. The sequences of the primers used for analyzing the human genes of interest are given in Table 2.
Western Blot Assay
After exposure to HGOM medium for five days and after transfection with the different lipopolyplexes (V-LPP/shRunx2, Scr-LPP/shRunx2, V-LPP/shCTR, and Scr-LPP/shCTR) and C60-PEI/shRunx2 polyplexes for 48 h, VIC were subjected to Western blot assay. The cells were washed with cold PBS and lysed in radio-immunoprecipitation assay (RIPA) buffer. The protein level in the cell lysates was quantified by the bicinchoninic acid assay following the manufacturer's instructions (Sigma-Aldrich, Merck KGaA, Darmstadt, Germany). After quantifying the total protein concentration, 30 µg/lane of cell protein extracts were separated on 5-15% gradient SDS-PAGE gels. After transfer onto nitrocellulose membranes using a Trans-Blot Semi-Dry system, the blots were probed with the appropriate primary antibodies: rabbit anti-Runx2 (1:200), rabbit anti-osteopontin (1:1000), goat anti-BSP (1:1000), and rabbit anti-BMP-2 (1:500). After washing, the blots were incubated with the appropriate secondary antibodies at RT for one hour. The membranes were then incubated with the chemiluminescent substrate and visualized with ImageQuant LAS 4000. The densitometry of the bands was determined with ImageJ software, developed at the National Institutes of Health (NIH, USA), and the results were normalized to β-actin and then calculated as fold change versus HGOM. The data were expressed as mean ± S.D. (standard deviation) of two experiments performed in duplicate.
Evaluation of Lipopolyplex Cytotoxicity
Cytotoxicity was assessed using the ToxiLight™ Cytotoxicity BioAssay Kit (Lonza cat. no. LT17-217), as previously reported [37]. This method measures the release of adenylate kinase (AK) from damaged cells into the culture medium. VIC were plated at a density of 50,000 cells/well in a 24-well plate and incubated with HGOM medium for 7 and 14 days, with a medium change every two days. On the 5th and 12th days, the cells were incubated with V-LPP/shCTR, Scr-LPP/shCTR, or C60-PEI/shCTR polyplexes. At 48 h after incubation with the lipopolyplexes and polyplexes (the 7th and 14th days in culture), the culture medium was collected for further determinations. For quantification of the released AK, 25 µL of medium were added to a 96-well plate and incubated with 100 µL of AK detection reagent for 5 min at RT. The luminescence was measured for 1 s on a Mithras LB 940 instrument (Berthold Technologies GmbH & Co. KG, Oak Ridge, TN, USA). The data were normalized to cells grown in HGOM medium, considered 1, and were expressed as mean ± S.D. (standard deviation) of two experiments made in quadruplicate.
Hemocompatibility Assay
The hemocompatibility of V-LPP/shCTR was investigated by measuring the hemolysis and erythrocyte aggregation induced by incubation of erythrocytes in the presence of the lipopolyplexes, as previously described [6]. Blood samples were collected from a C57BL/6J mouse (12-week-old male; Stock No: 000664, The Jackson Laboratory) in EDTA-containing tubes and centrifuged at 1000× g for 15 min.
The resulting plasma was collected separately, and the erythrocyte pellet was diluted 1:10 in PBS, pH = 7.4, containing different concentrations of V-LPP/shCTR lipopolyplexes ranging from 14 nM to 140 nM lipids (corresponding to shCTR plasmid concentrations between 4.5 and 45 µg/mL and C60-PEI concentrations between 20 and 200 µg/mL) and incubated at 37 °C for 1 h. The concentrations were calculated to imitate the i.v. administration of lipopolyplexes in blood, considering that 1 mL of blood contains approximately 450 µL of erythrocytes. The samples (in triplicate) were then centrifuged at 1000× g for 15 min to sediment the intact erythrocytes, and the supernatants, containing the released hemoglobin, were transferred to a flat-bottom 96-well plate and measured at 540 nm using the TECAN Infinite M200Pro instrument. The incubation of erythrocytes in PBS and in 0.5% Triton X-100 (considered 100% hemolysis) served as the negative and positive controls, respectively. The percentage of hemolysis was calculated using the following equation:
% Hemolysis = (Absorbance of sample − Absorbance of negative control) / (Absorbance of positive control − Absorbance of negative control) × 100
The sedimented erythrocytes were resuspended in PBS, placed on glass slides, and examined for aggregation using an Olympus IX81 light microscope.
Statistical Analysis
The results were expressed as mean ± standard deviation (S.D.), and the experiments were performed in duplicate, triplicate, or quadruplicate. Statistical analyses were performed using GraphPad Prism software version 9.2.0 (332) (GraphPad Software, La Jolla, CA, USA). The statistical differences were calculated with an unpaired two-tailed t-test for the comparison of two groups or one-way ANOVA with a multiple comparisons post hoc Tukey test for the comparison of three or more groups. Statistical significance of differences: * p < 0.05, ** p < 0.01, *** p < 0.001.
Conclusions
To downregulate Runx2 expression in activated VIC, we have designed and obtained targeted nanocarriers, namely lipopolyplexes consisting of VCAM-1 targeted, lipid bilayer-encapsulated C60-PEI/shRunx2 polyplexes for specific delivery to oVIC. The V-LPP/shRunx2 lipopolyplexes are cyto- and hemocompatible and are specifically taken up by oVIC. These lipopolyplexes are functional in the downregulation of Runx2 expression and the subsequent significant decrease in the expression of other osteogenic molecules (OSP, BSP, BMP-2) in oVIC. The newly developed specific molecule-directed lipopolyplexes represent a promising targeted therapeutic RNAi-based strategy for CAVD by hindering the osteodifferentiation of aortic VIC exposed to pathological stimuli.
Institutional Review Board Statement: The study was approved by the Ethics Committee of the Institute of Cellular Biology and Pathology "Nicolae Simionescu" (approval no. 16/14.09.2016).
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
2022-04-03T16:01:12.985Z
2022-03-30T00:00:00.000
{ "year": 2022, "sha1": "12f208d998a326f31b21834363aaf7c6cf0bad18", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/23/7/3824/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f29589ab4ea1d36307ead440b9ad967ccbd5cb48", "s2fieldsofstudy": [ "Biology", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
122041953
pes2o/s2orc
v3-fos-license
Green function study of quantum transport in ultra-small devices with embedded atomistic clusters
Transport in limiting-scale MOSFET transistors will be strongly influenced by quantum effects and by the presence of atomistic scattering centres, either intentionally or unintentionally present in the channel and the device environs. The scattering in such systems is non-asymptotic, and the self-averaging conditions of the Kohn-Luttinger theorem fail, so that a self-energy for impurity scattering does not exist. Atomistic scattering must therefore be treated non-perturbatively. Previously it has been shown that quantized micro-vortices may occur at definite energies in the current flow, contributing to both the blocking effect and the effective mobility. The present study uses the Glasgow and NASA NEGF simulators to study vortex formation and tunnelling through small clusters of atomistic impurities arranged in various configurations within the 5 nm wide by 12 nm long channel of a double gate MOSFET. The I-V characteristics and the threshold voltage are severely affected by the distribution of the charges in the channel. A variety of atomistic clusters of different geometry have been studied. Examination of the energy-dependent current density allows an evaluation of the admixture of strong quantum flows, such as micro-vortices, to the net current. It is found that the threshold voltage and conductance are strongly dependent on the impurity configuration. The I-V characteristics are monotonic in most cases due to the strong thermal smoothing that prevents resolution of the mode structure.
Introduction
The need to model ultra-small silicon MOSFET devices in the hypothesised quasi-ballistic regime has focussed attention on schemes based on quantum transport methodology. The deepest advances have been made by the application of the non-equilibrium Green function (NEGF) formalism to semiconductor devices [1-5]. The NEGF formalism as applied to devices [2] is based on infinite-order perturbation theory that leads to a set of coupled integro-differential equations for the various Green functions, determined by the appropriate self-energy functions. The open system problem is circumvented by selecting the Green function for the finite device region by folding the coupling to the contacts into a boundary-controlled self-energy. Although this makes the problem just tractable, the existence of the standard equations, in particular the collisional self-energies, depends especially on the assumption that the scattering perturbations are self-averaging. This is reasonable for coupling to the large phonon bath, but the conventional structure of the self-energy for impurity scattering only exists if the self-averaging ansatz of Kohn and Luttinger [6,7] holds true. Unfortunately, for finite small devices the microscopic discrete distributions of impurities in the channel and the source and drain regions are non-self-averaging random variables [8-15]. Classical studies have shown that these contribute strong fluctuations in device parameters such as threshold voltage and effective channel mobility [16,17]. In the present paper we discuss the consequences of non-self-averaged scattering on finite atomistic clusters of impurities using the NEGF formalism adapted to treat impurity scattering non-perturbatively. In a companion paper we present an extension of the formalism to interface roughness scattering in ultra-small MOSFETs.
Break-down of the Kohn-Luttinger ansatz
Consider the total scattering potential for a randomly distributed system of N_I identical discrete impurities (typically fixed screened Coulomb centres) at positions r_j as

V(r) = Σ_{j=1..N_I} v(r − r_j).    (1)

The Fourier transform of eqn. (1) yields a convenient extraction of the Fourier transform of the particle density of the impurities, ρ_I(q):

V(q) = v(q) ρ_I(q),    (2)
ρ_I(q) = Σ_{j=1..N_I} exp(−iq·r_j).    (3)

The crucial self-averaging property of large numbers of such randomly distributed impurity potentials was established by Kohn and Luttinger [6,7] for uniform random distributions. An extension to more general distributions is discussed in Elliot et al. [18]. The standard perturbation expansion [7] of the Green functions for electrons interacting with the random impurity array involves products of the form

F_I(q_1, q_2, ..., q_n) = ρ_I(q_1) ρ_I(q_2) ... ρ_I(q_n).    (4)

In conventional Green function theory the structure factors F_I(q_1, q_2, ..., q_n) are averaged over a random ensemble. As an example, for large numbers of impurities, N_I >> 1, the Kohn-Luttinger ansatz asserts that we may replace expression (3) by

ρ_I(q) → N_I δ_{q,0},    (5)

i.e. the sum over random impurities is zero unless q = 0, when the generally complex sum becomes a real number equal to the number of impurities N_I. In a recent study one of us [9,11] examined over 100 sequences of uniform random distributions of impurities at different doping densities. It is found that condition (5) only holds for N_I > 1000 impurities. For smaller numbers the structure factor is complex; the imaginary part is non-zero for most values of q. In particular, for qd > 1, where d is the mean separation between impurities, there will be interference between the scattering from the individual impurity potentials. Figure 1 shows the function K(q) = |Σ_{j=1..N_I} exp(−iq·r_j)| / N_I for 10, 100, and 1000 impurities randomly distributed in a box of 50 nm side. The function is plotted in the q_z = 0 plane. The wave vector limits are at ±π/10 nm^−1. Self-averaging occurs approximately for N_I > 1000, when K(q) → δ_{q,0}. For atomistic devices, where N_I < 1000, the Kohn-Luttinger ansatz and standard infinite-order perturbation theory fail: the Green function expansion G in Figure 2 is not equivalent to the ensemble average <G> and a self-energy cannot be defined. As a consequence the standard gain-loss NEGF transport equations are not valid for impurity scattering, which must instead be treated non-perturbatively. The nature of the scattering for non-self-averaging configurations may be seen by using the T-matrix expansion of the (retarded) Green function,

G = G_0 + G_0 T G_0,    (6)

where the full T-matrix may be expanded in terms of the individual t-matrices t_j for the j = 1...N_I impurities,

T = Σ_j t_j + Σ_{j≠k} t_j G_0 t_k + ...    (8)

The leading term in (8) describes the scattering off each impurity; in non-self-averaging systems it produces interference effects due to the final superposition of each scattered wave. The second term describes scattering on one impurity followed by propagation and scattering off a second impurity: pair-wise multiple scattering. Both terms are significant in atomistic devices. Indeed, the leading term gives rise to strong interference effects; to leading order the matrix element for T, for example, becomes

<k + q|T|k> ≈ t(q) Σ_{j=1..N_I} exp(−iq·r_j).    (9)

It is clear that if the interference term did not occur, the scattering cross section would be just N_I times the cross section for single impurity scattering. For qd << 1, the interference term sums to N_I − 1 per impurity, yielding a cross section that is N_I^2 bigger than the single impurity case (a numerical sketch of this interference behaviour is given below).
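The self-averaging threshold can be checked numerically. The following is a minimal sketch, not the paper's code: it draws N_I uniformly random impurity positions in a 50 nm box (the box size and wave-vector range match the figures quoted above) and evaluates K(q) = |Σ_j exp(−iq·r_j)|/N_I along a line of wave vectors, so that K(0) = 1 always, while K(q ≠ 0) decays toward zero only as N_I grows:

```python
# Sketch: self-averaging of the impurity structure factor K(q).
# Illustrative only; box size and q-range match the quantities quoted in the text.
import numpy as np

rng = np.random.default_rng(0)
L = 50.0  # box side in nm

def K(q_vectors, positions):
    """K(q) = |sum_j exp(-i q . r_j)| / N_I for a set of impurity positions."""
    phases = np.exp(-1j * q_vectors @ positions.T)  # shape: (n_q, N_I)
    return np.abs(phases.sum(axis=1)) / positions.shape[0]

# Wave vectors along q_x in the q_z = 0 plane, up to pi/10 nm^-1.
q = np.linspace(0.0, np.pi / 10.0, 200)
q_vectors = np.column_stack([q, np.zeros_like(q), np.zeros_like(q)])

for n_imp in (10, 100, 1000):
    r = rng.uniform(0.0, L, size=(n_imp, 3))  # uniform random impurity positions
    k = K(q_vectors, r)
    # Away from q = 0, K(q) ~ 1/sqrt(N_I) for random positions, so only
    # large N_I approaches the delta-like limit K(q) -> delta_{q,0}.
    print(f"N_I = {n_imp:5d}: K(0) = {k[0]:.2f}, mean K(q>0) = {k[1:].mean():.3f}")
```

For qd << 1 all phases add coherently (the N_I^2 cross-section enhancement noted above), while for qd >> 1 the random phases suppress K(q) roughly as 1/sqrt(N_I), which is the self-averaging limit discussed next.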
For qd >> 1, the interference term oscillates very rapidly and approximately cancels out when the number of impurities is very large. We then obtain the self-averaging limit: the cross section is then N_I times larger than in the individual impurity case, which is the assumption used in deriving quantum relaxive transport (where a self-energy exists) or Boltzmann-Bloch transport.
Double gate device model
For finite open coherent atomistic systems, the long sequences of incoherently repeated collisions encountered in the limit of large numbers of impurities cannot occur. The dominant effect on transport will come from the simple coherent superposition of the waves scattered from each impurity together with the incident waves entering from the source and any waves reflected at the drain. Previously, approximate analytical models were developed [8,9] for the scattering Green function in an open finite atomistic device using hard sphere scattering models. It was shown that the net flow in a steady pure state of constant energy is a meandering open flow from source to drain, enclosing the few impurities and several localised micro-vortex flows. The latter arise particularly from the leading interference terms in (8). In the following we show that this result is quite general. Here we investigate the self-consistent current density in a realistic short channel double gate silicon MOSFET device (figure 3) containing just 3 unintentional discrete dopants in the channel. The spatial configuration is varied to investigate the dependence of both the flows and the device performance on the spatial geometry. The layout of the simulated double gate MOSFET device is shown in figure 3. The device has metal gate contacts and 1 nm oxide thickness. The channel length is 12 nm, which allows a ballistic approach; the channel body thickness is 5 nm; the doping in the source and drain regions, N_D, is 10^20 cm^−3, and the channel doping, N_A, is 10^14 cm^−3. The reservoir alignment was varied from well-matched (source and drain cross section = channel cross section) to wide reservoir. The device is simulated at 300 K.
Modelling
The results are presented for NEGF modelling of the device assuming an anisotropic effective mass Hamiltonian H_b for each of N_b independent valleys:

H_b = −(ħ^2/2) [ (1/m_x,b) ∂^2/∂x^2 + (1/m_y,b) ∂^2/∂y^2 + (1/m_z,b) ∂^2/∂z^2 ] − eφ(R).    (10)

Figure 4: energy iso-surfaces for the electron valleys in silicon.
The anisotropic effective mass ratios (silicon) are 0.19 and 0.98. The silicon effective mass ratio in the isotropic model is taken as 0.3283. Other data correspond to reference [3]. In (10), φ(R) = φ(r,z) is the local electrostatic potential obtained self-consistently from Poisson's equation applied to the full device geometry and the continuous doping distributions, including the fields of the three unintentional discrete dopant (Coulombic) impurities. The impurity and electrostatic potentials are treated non-perturbatively. The simulation is very compute-intensive for 3D models. Instead, we assume the double gate device is very wide in the z-direction, rendering the computation effectively two-dimensional. As a consequence the 3D description of the atomistic impurities corresponds to long line charges perpendicular to the x-y plane. The limitations of the 2D simulation are discussed further in the final section. The simulation is based on the self-consistent recursive thermodynamic Green function method, outlined elsewhere [3], using a suitable discrete spatial grid. The computations yield the total charge density n(r) and the total current density j(r).
The energy-resolved charge density n(r;E) and current density j(r;E) are also computed at interesting energies E. The ratios v(r) = j(r)/n(r) and v(r;E) = j(r;E)/n(r;E) define the total velocity field and the energy-resolved velocity field, which are useful for examining the quantum hydrodynamics of the flows. As an example, the valley-resolved electron density is determined from the valley (lesser) Green functions as

n_b(r) = −i ∫ (dE/2π) G_b^<(r, r; E).    (11)

Poisson's equation is

∇·(ε(r)∇φ(r)) = −e [N_D(r) − N_A(r) − n(r)].    (12)

Equation (12), augmented for the point impurities, is solved self-consistently with the Green functions.
Atomistic configurations
The simulations are based on four configurations of three (repulsive Coulombic) atomistic dopants, as shown in figure 5. Configuration H is a horizontal strongly localised configuration that acts to divide the channel; configuration V is a vertical configuration that acts to strongly block the channel; configuration S is axially symmetric and classically would allow current flow between the forward impurities; finally, T is an asymmetric arrangement that blocks the flow in the lower region of the channel. Figure 6 shows the calculated potential distribution for the four different configurations (H, V, S, T) of three dopants for gate voltage V_G = 1.5 V and drain voltage V_D = 0.6 V. The source and drain are located at the left and right of every panel. The two gates are located at the top and the bottom of the panels. The spatial scale is stretched in the x-direction, leading to some apparent distortion of the distributions. Figure 6 also shows the impurity potential peaks and the effective barriers presented to electrons incident from the source. The three pronounced peaks in the potential mark a region where the current may form circulatory flows and where diffraction, interference and tunnelling may occur in the ballistic current due to the triangular configuration of impurities. The strong confinement couples with scattering on the discrete charges to create macroscopic quantum interference patterns that produce regions of low electron concentration where, classically, a high electron concentration is expected. At any given energy, an electron moves coherently through the channel and may tunnel through the potential barrier created by the charge configuration. This tunnelling current can produce significant differences in the current-voltage characteristics compared with semi-classical derivations (such as the density-gradient drift-diffusion method). Figure 7 shows the computed current-voltage characteristics corresponding to the four configurations (H, V, S, T) at V_D = 0.6 V. Results are shown for the isotropic (full lines) and anisotropic (dashed lines) effective mass models. At sub-threshold, the conductances dI/dV are very similar for the anisotropic mass model and the weaker isotropic model. However, the use of the realistic anisotropic mass model leads to a 200 mV shift in threshold voltage from the predictions of the isotropic model. The I-V characteristics and the threshold voltage are severely affected by the distribution of the charges in the channel. Conductances and threshold voltages show significant statistical spread.
The spatial distribution of current
The current density reveals interesting structure. The current density profile J_y(x,y) at low gate voltage (V_G = 1.5 V) is shown in figure 8 for the S-configuration in the anisotropic mass model. It shows that the majority of the current flow passes between the two foremost impurities, followed by bifurcation of the flow around the rear impurity (a minimal streamline sketch based on the velocity fields defined above is given below).
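The hydrodynamic picture reduces to integrating streamlines of the velocity field v(r) and testing circulation around suspected vortex centres. Here is a minimal sketch with a toy analytic velocity field (a uniform flow plus one vortex), standing in for the v(r) = j(r)/n(r) field of the simulator; all positions and strengths are illustrative:

```python
# Sketch: streamlines of a velocity field v(r) and a simple circulation test.
# The toy field (uniform flow + one vortex) stands in for v(r) = j(r)/n(r).
import numpy as np

def v(r):
    """Toy field: uniform flow along x plus a vortex centred at (6, 2.5) nm."""
    x, y = r
    dx, dy = x - 6.0, y - 2.5
    rho2 = dx * dx + dy * dy + 1e-3  # softened to avoid the singular core
    return np.array([0.5 - 1.5 * dy / rho2, 1.5 * dx / rho2])

def streamline(r0, ds=0.01, n_steps=2000):
    """Integrate dr/ds = v(r)/|v(r)| with fixed-step midpoint (RK2) steps."""
    pts = [np.asarray(r0, dtype=float)]
    for _ in range(n_steps):
        r = pts[-1]
        v1 = v(r); v1 = v1 / np.linalg.norm(v1)
        v2 = v(r + 0.5 * ds * v1); v2 = v2 / np.linalg.norm(v2)
        pts.append(r + ds * v2)
    return np.array(pts)

def circulation(center, radius, n=400):
    """Loop integral of v . dr around a circle; nonzero circulation marks a vortex."""
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    pts = np.column_stack([center[0] + radius * np.cos(t),
                           center[1] + radius * np.sin(t)])
    dr = np.roll(pts, -1, axis=0) - pts
    return sum(float(np.dot(v(p), d)) for p, d in zip(pts, dr))

path = streamline([6.0, 3.0])  # a seed near the vortex winds around it
print("circulation around the vortex ≈", round(circulation((6.0, 2.5), 0.4), 2))
```

The uniform background contributes nothing to the loop integral, so a non-zero circulation isolates the vortex; in the quantum case the circulation of a pure-state flow is quantised, which is the diagnostic used in the text.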
At very high gate voltage, V_G = 2.4 V, the current flow concentrates close to the gates in two parallel streams that avoid the dopants. This high-voltage behaviour, shown in figure 9, does not occur for the isotropic mass model. There is significant tunnelling present, which is enhanced by the low-mass components. The full current density vector field J(x,y) is also shown in figure 8.
Macro-vortices
The related velocity fields v(r) and v(r;E) are useful for a quantum hydrodynamic picture of the flow. For steady flows it is simple to demonstrate that, for pure or mixed states, the streamlines do not cross and the topology of the flow is determined by the hyperbolic and vortex-centre singularities of the autonomous equation dr/ds = v(r). For pure states, the velocity field v(r;E) may contain quantised vortices at the strong nodes of the electron density n(r;E) [8-15,19]. The energy-resolved velocity fields that underlie the data for figure 8 show micro-vortices located within a meandering current. The vortices act to turn the current flow in space: they are essentially localised centres with well-defined angular momentum states. In closed systems, the localised centres may still occur, but they are degenerate, with clockwise and anti-clockwise circulation corresponding to the usual degenerate angular momentum states. The full velocity field v(r) shows a meandering flow very like the classical flow predicted for atomistic landscapes [16,17], but with additional structure following from the energy-resolved vortices, which are generally hidden in the total flow by the thermal superposition of states. Interestingly, despite the strong thermal suppression, it is easy to find atomistic configurations that display vortex flows in the total electron velocity field (a classical-like idealised vortex) and indeed in the current density vector field (a Rankine-like vortex). However, these macro-vortices are defined by mixed state flows that show non-zero but not necessarily quantised circulation. Figure 10 illustrates a case in point: a double gate device with four unintentional dopants. For this particular (artificial) configuration the impurities form an effective offset cavity in the channel. The current density and velocity field show a strong macro-vortex in the flow.
Figure 10. Total electron density, total current density J_y and total current density vector field for four atomistic impurities embedded in a double gate device at 300 K.
Transmission function profiles
The quantisation of the transverse states in the channel of the double-gate device gives rise to a transmission function T(E) that has a characteristic stepped structure as a function of energy. This is a consequence of the opening of new Landauer channels as the energy is increased. The NEGF simulations predict very different behaviour for T depending on the reservoir geometry (figure 11) and the presence of atomistic impurities. Figure 12 shows the various transmission functions corresponding to the different contributing valleys (see figure 4) for no atomistic impurities and for wide and matched reservoir geometries. At low gate bias there is more reflection at the wide reservoir-channel junction. At high gate bias, resonances appear in the wide-reservoir transmission, as noted elsewhere [20]. Figure 13 shows the effect of adding three atomistic impurities to the simulation, using configuration S of figure 5 (a toy transmission calculation is sketched below for orientation).
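As orientation for these T(E) profiles, the transmission in a Green-function calculation follows the standard Caroli/Landauer formula T(E) = Tr[Γ_L G^R Γ_R G^A]. The sketch below applies it to a toy 1D tight-binding wire with a few on-site "impurity" energies; it is not the paper's 2D multi-valley simulator, and all parameters are hypothetical:

```python
# Sketch: Landauer transmission T(E) of a 1D tight-binding wire with impurities,
# via the retarded Green function and lead self-energies. Toy model only.
import numpy as np

t = 1.0           # hopping energy (arbitrary units)
n_sites = 40      # wire length
impurity_sites = {10: 0.8, 14: 0.8, 22: 0.8}  # site index -> on-site shift

H = np.zeros((n_sites, n_sites))
for i in range(n_sites - 1):
    H[i, i + 1] = H[i + 1, i] = -t
for site, eps in impurity_sites.items():
    H[site, site] = eps

def lead_self_energy(E):
    """Retarded surface self-energy of a semi-infinite 1D chain (analytic form)."""
    z = (E + 1e-9j) / (2.0 * t)
    return t * (z - 1j * np.sqrt(1.0 - z * z))  # valid inside the band |E| < 2t

def transmission(E):
    sigma = lead_self_energy(E)
    Sigma_L = np.zeros_like(H, dtype=complex); Sigma_L[0, 0] = sigma
    Sigma_R = np.zeros_like(H, dtype=complex); Sigma_R[-1, -1] = sigma
    G = np.linalg.inv(E * np.eye(n_sites) - H - Sigma_L - Sigma_R)  # retarded G
    Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)  # lead broadening matrices
    Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
    return np.real(np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T))

for E in (-1.5, -0.5, 0.0, 0.5, 1.5):
    print(f"E = {E:+.1f}: T = {transmission(E):.3f}")
```

Without the impurity terms the wire transmits perfectly (T = 1) throughout the band; the impurities superimpose interference dips and resonances on that background, which is the 1D analogue of the washed-out Landauer steps discussed next.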
The transmission function loses the sharp step structure as interference scattering from the impurities adds or subtracts to the number of Landauer channels in the system. At low bias the transmission is similar for the wide and matched reservoirs. At high bias, large differences occur between the wide and matched reservoir devices. The complex patterns are associated with tunnelling and resonances as the bare impurity potentials emerge from the potential landscape.
Conclusions
We have found that the predicted threshold voltage and conductance for a short channel double gate MOSFET are strongly dependent on the impurity configuration. The underlying energy-resolved transport shows complex flows typified by meandering streamlines enclosing localised vortex flows. The consequent transmission functions are highly irregular. However, the current-voltage characteristics are monotonic in all cases due to the strong thermal smoothing that prevents resolution of the mode structure. Some configurations may nevertheless generate macro-vortices in the flow. The overall processes predicted for atomistic scattering are in accord with the break-down of the Kohn-Luttinger ansatz for small numbers of impurities. The results of a parallel study suggest that de-coherence processes do not seriously impair these quantum processes provided the coherence lengths are long compared to the 12 nm channel [21]. This situation may not be true for devices with high-κ gate stacks, for which plasmon-enhanced SO phonon scattering extends into the channel from the interfaces.
2019-04-19T13:04:39.266Z
2006-04-01T00:00:00.000
{ "year": 2006, "sha1": "5dbd2b53d9126ae981c8589aeec7cf22cb62e87f", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/35/1/021", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "f560618ba9f8120e8147a6915d09f41e2d69f9c0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
178517928
pes2o/s2orc
v3-fos-license
“Structural Disaster” Long Before Fukushima: A Hidden Accident*
This paper attempts to shed fresh light on the structural causes of the Fukushima accident by illuminating the patterns of behavior of the agents involved in the little-known but serious accident that occurred immediately before World War II. Despite the expected incalculable damages caused by the Fukushima nuclear power plant accident, critical information was restricted to government insiders. This state of affairs reminds us of the state of prewar Japanese wartime mobilization, in which all information was controlled under the name of supreme governmental authority. This paper argues that we can take the comparison more seriously as far as the patterns of behavior of the agents involved are concerned. The conceptual tool that is employed to that end is the “structural disaster” of the science-technology-society interface. This paper will contextualize the sociological implications of this prewar accident, which happened long before the Fukushima accident, for all of us who face the post-Fukushima situation, with particular focus on the subtle relationship between success and failure.
Introduction
The Fukushima nuclear power plant accident was extremely shocking, but what is even more shocking in the eyes of the present writer is the devastating failure in transmitting critical information about the accident to the people when the Japanese government faced unexpected and serious events after March 11, 2011. Secrecy toward outsiders has generated this failure: secrecy toward the people who were forced to evacuate from their birthplaces, toward the people who wanted to evacuate their children, toward the people who have been suffering from tremendous opportunity losses such as giving up entering college, and others. It is virtually impossible to enumerate all of the individual instances of suffering and aggregate them in an ordinarily calculable manner. Despite such expected incalculable damages, critical information was restricted to government insiders. This state of affairs seems to be similar to the state of prewar Japanese wartime mobilization, in which all information was controlled under the name of supreme governmental authority. One might consider such a comparison with the prewar state to be merely rhetorical. This paper argues that we can take the comparison more seriously as far as the patterns of behavior of the agents involved are concerned. It is true that the prewar Japanese military regime was oriented toward mobilization for war, while the postwar regime has been prohibited by the constitution from mobilization for the purpose of war of any kind. In this respect, there is a large discrepancy between the prewar and postwar regimes as to their purpose. 1 However, the surprising but telling similarity of the patterns of behavior of the agents in such discrepant regimes is evident if we look into the details of a hidden accident that took place just before the outbreak of World War II (abbreviated to WWII hereafter). This paper attempts to shed fresh light on the structural causes of the Fukushima accident by illuminating the patterns of behavior of the agents involved in the little-known but serious accident involving naval vessels that occurred immediately before WWII, focusing particularly on the subtle relationship between success and failure in the complex science-technology-society interface.
Similarities and differences will then be contextualized and their sociological implications drawn for all of us who face the post-Fukushima situation. The conceptual tool that is employed here to that end is the “structural disaster” of the science-technology-society interface.
The “Structural Disaster” of the Science-Technology-Society Interface
The “structural disaster” of the science-technology-society interface is a concept developed to give a sociological account of the repeated occurrence of failures of a similar type (Matsumoto 2002, pp. 25-7; 2012a). In particular, it is developed to clarify a situation where novel and undesirable events happen but without a single agent to blame, to allocate responsibility for the events, or to prescribe remedies. The reason for denominating this failure as the failure of the science-technology-society interface, rather than that of science, or of technology, or of society, is worthy of attention in order to understand the development of my argument. For example, even if nuclear physics is completely successful in understanding the process of chain reaction, technology such as nuclear engineering could fail in controlling the reaction, as in the case of Chernobyl and its aftermath such as the “Cambrian sheep” incident (Wynne 1996). 2 Or if nuclear engineering is almost completely successful in containing radioactive materials within reactors, social decision-making could fail, as in the case of the Three Mile Island accident (Perrow 1984, 1999; Walker 2004). Or if society is completely successful in setting goals for the development of renewable energy technologies, science and/or technology could fail, as in the case of Ocean Thermal Energy Conversion (Matsumoto 2005). In a word, the success or failure of science, technology, and society cannot be overlapped automatically (Latour 1996). In particular, there seems to be something missing in-between, which has unique characteristics of its own. The failure of the interface is intended to explore this state. What lies in-between could be institutional arrangements (Frickel and Moore 2006), organizational routines (Vaughan 1996; Eden 2004), and tacit interpretations of a formal code of ethics, invisible customs, or the networks of interests of different organizations. This paper focuses on, among other things, the structural similarity in terms of the patterns of behavior of the heterogeneous agents that come into play in the science-technology-society interface in a specific social condition. If the elements of “structural disaster” can be substantiated based on other independent cases, then we will be in a stronger position to obtain pertinent sociological implications from the Fukushima accident as a “structural disaster” and to extend these implications to potential future extreme events. What follows is an independent substantiation of these elements by examining the almost unknown accident that happened long before the Fukushima accident.
The Basic Features of “Structural Disaster”
According to Matsumoto (2012a, p. 46), there are five elements that constitute “structural disaster”:
1. Following wrong precedents carries over problems and reproduces them.
2. Complexity of a system under consideration and the interdependence of its units aggravate problems.
3. Invisible norms of informal groups virtually hollow out formal norms.
4. Patching over problems at hand invites another patching over for temporary countermeasures.
5.
Secrecy develops across different sectors and blurs the locus of the agents responsible for the problems in question.
The relevant element running through the above-mentioned prewar accident and the Fukushima accident is secrecy. To be accurate, the development of secrecy in “structural disaster” is decomposed into organizational errors, secrecy, and a chain of secrecy to hide such errors. And to capture the nature of secrecy in this connection, the following fact about the Fukushima accident should be kept in mind in approaching the almost unknown accident that occurred long before the Fukushima accident: there have arisen repeated occurrences of similar patterns of behavior that have run through various different instances, which in the end have led to secrecy. It is true that the emergency situation during and after such an extreme event as the Fukushima accident can provide a good reason to expect confusion and delay in transmitting information. But the degree and range of the confusion and delay went far beyond what was to be expected from an emergency situation alone. For example, the System for Prediction of Environmental Emergency Dose Information (abbreviated to SPEEDI hereafter) was developed, at a cost of more than ten billion yen, to make the early evacuation of affected people smoother and safer. The first recommendation for evacuation was made by the Japanese government on March 12. The prediction obtained from SPEEDI was made public for the first time on April 26, despite the fact that its prediction had been made immediately after the accident. As a result of this secrecy, residents affected by the accident were advised by the government to evacuate without reliable information at the critical initial phase, when they were exposed to a high-level dose of radiation (Matsumoto 2012a, 2012b). All they could do was to decide between trusting the government or not. SPEEDI had been awarded the first nuclear history award by the Atomic Energy Society of Japan in 2009 (Atomic Energy Society of Japan 2009), but its prediction made immediately after the Fukushima accident was never made public when it was needed. Organizational errors have intervened behind this state of affairs. This is the basic point of reference in approaching the almost unknown accident that happened long before the Fukushima accident as a “structural disaster” and in securing a broader perspective for obtaining sociological implications from the Fukushima accident and the almost unknown accident that happened long before it. The almost unknown accident mentioned here is the accident of the marine turbine developed by the Imperial Japanese Navy that occurred immediately before the outbreak of WWII. This accident enables us to redefine the complex relationship between success and failure in the science-technology-society interface both in peacetime and wartime. The accident was treated as top secret because of its timing. The suppression of information about this accident means that it has not been seriously considered as an event in the sociology of science and technology up to now. However, the description and analysis of this accident will suggest that the reality of the science-technology-society interface can depart significantly from a simplistic understanding in terms of success or failure.
Ships and Tips: The Development Trajectory of the Kanpon Type and Its Pitfalls
To understand the reality of this almost unknown accident, it is to the point to introduce two important keywords, “ships” and “tips,” as these keywords pinpoint the locus of the complex relationship between success and failure. “Ships” here mean the naval vessels of the Imperial Japanese Navy built until immediately before WWII. They symbolize the Navy's success in Japan's development of self-reliant technologies. “Tips” here are the broken pieces of naval turbine blades, which symbolize the completely unexpected failure of technologies. The technology taken up is the Kanpon type turbine, Kanpon being the Technical Headquarters of the Navy. The Kanpon type turbine was developed by the Imperial Japanese Navy around 1920 to substitute entirely self-reliant technologies for imported ones. This naval turbine provides the key to understanding the connection between ships and tips. The reason is that the Kanpon type was the standard turbine for Japanese naval vessels from 1920 to 1945, and behind the broken pieces of its blades lay a serious but little-known failure that occurred immediately before WWII. The core of the connection between ships and tips consists in the background against which the Kanpon type turbine was developed. From the time of the first adoption of the marine turbine in the early twentieth century (1905), after intensive investigations and license contracts, the Imperial Japanese Navy accumulated experience in the domestic production of marine turbines. Throughout this process, the Navy carefully monitored the quality of British, American, and various other Western type turbines and evaluated them. 3 To replace imported turbines, the Kanpon type turbine achieved standardization in design, materials, and production method “that is independent of foreign patents” (Shibuya 1970, Vol. 1, Chap. 4, pp. 133-4). The Kanpon type turbine was also expected to achieve cost reduction and flexible usage for a wide range of purposes, which would be made possible by standardization. The first Kanpon type turbine was installed in destroyers built in 1924 (see figure 1). 4
3 The British type originated in the Parsons turbine and the American type in the Curtis turbine, respectively. The first demonstration of the Parsons turbine at the Naval Review in 1897 caused a sensation in a complicated manner (Legett 2011). Regarding the Curtis turbine, see Somerscale (1992). The license contracts with the Curtis and the Parsons types were due to expire in June 1923 and August 1928, respectively. Considering this situation, the Navy started to take official steps to develop its own type. On detailed descriptions and analyses of these dual strategies of the Navy outlined here, see Matsumoto (2006, pp. 54-63).
4 All Japanese naval vessels continued to adopt this Kanpon type turbine until 1945.
Everyone regarded it as a landmark that showed the beginning of the adoption of self-reliant technologies. This is because, as the Japanese Shipbuilding Society wrote in its official history, “there had been no serious trouble with the turbine blades for more than ten years since the early 1920s, and the Navy continued to have strong confidence in their reliability” (The Japanese Shipbuilding Society 1977, p.
668). What follows is an important counterargument to this account, made up of a unidirectional development trajectory of technologies and a dichotomous success-or-failure account of the science-technology-society interface, by calling attention to the missing failure linking ships and tips, a pitfall inherent in the trajectory. The pitfall was profoundly related to an unbalanced secrecy within and without the military-industrial-university complex, the key factor leading to the “structural disaster” embodied by the almost unknown accident. The military-industrial-university complex hereafter means an institutional structure made up of the governmental sector, particularly the military, the private industrial sector, and the universities, mutually autonomous in their behavior but expected together to contribute to national goals (Matsumoto 2006, p. 50). 5
The Significant Failure Kept Secret
In December 1937, a newly built destroyer encountered an unexpected turbine blade breakage. Since the failure involved a standard design engine of the Kanpon turbine, it caused great alarm. However, it is extremely difficult to look into further details of this accident, because there is little evidence to prove what is stated by official accounts (Sendō 1952; Itō 1956; War History Unit of the National Defense College of the Defense Agency 1969; Japanese Shipbuilding Society 1977; Institute for the Compilation of Historical Records on the Navy 1981). All the authors/editors of the official accounts were parties connected with the Imperial Japanese Navy (see table 1). It appears that the accident was kept secret because it occurred during wartime mobilization. To confirm this, an examination of government documents from around the time of the accident is in order. The government documents consulted here are the minutes of the Imperial Diet sessions regarding the Navy. The minutes of the 57th Imperial Diet session (held in January 1930) to the 75th Imperial Diet session (held in March 1940) contain no less than 7,000 pages of navy-related discussions (Kanbō Rinji Chōsa Ka 1984). These discussions include ten naval vessel incidents, summarized in table 2. It is noteworthy in these discussions that the Fourth Squadron incident of September 1935, one of the most serious incidents in the history of the Imperial Japanese Navy, was made public and discussed in the Imperial Diet sessions within a year (on May 18, 1936). 6 The accident in question occurred on December 29, 1937, and was handed down informally within the Navy and counted as a major incident on a par with the Fourth Squadron incident. 7 More than two years after the accident, however, there is no sign in the government documents indicating that it was made public and discussed in the Imperial Diet sessions. Reports on the accident had already been submitted, as will be detailed below, during the period from March to November 1938 (the final report was submitted on November 2). Nevertheless, the Imperial Diet heard nothing about the accident or any details of the measures taken to deal with it. The accident was so serious that it would have influenced the decision on whether to go to war with the U.S. and Britain. The Fourth Squadron incident was also serious enough to influence the decision after the London naval disarmament treaty was concluded in 1930. 8 But it was made public and discussed in the Imperial Diet sessions.
In this respect, there is a marked difference between the handling of the two incidents.

[Footnote 6: The Tomozuru incident of March 11, 1934 was the first major incident for the Imperial Japanese Navy. Only one and a half years after this, a more serious incident occurred on September 26, 1935: the Fourth Squadron incident.]

[Footnote 7: Based on interviews by the present writer with Dr. Seikan Ishigai on September 4, 1987 and June 2, 1993, and with Dr. Yasuo Takeda on September 25, 1996 and March 19, 1997.]

[Footnote 8: The purpose of this treaty was to restrict the total displacement of all types of auxiliary warships other than battleships and battle cruisers. The London treaty obliged the Imperial Japanese Navy to produce a new idea in hull design enabling heavy weapons to be installed within a small hull, which, however, was achieved at the expense of the strength and stability of the hull, as the incident dramatically showed.]

When the Fourth Squadron incident was raised in the Imperial Diet, a naval official answered the questions put to him. Although his answer gave no information regarding the damage to human resources (all members of the crew confined within the bows of the destroyers died), it accurately stated the facts of the incident and the material damage incurred, which amounted to 2.8 million yen in total. Even the damage due to the collision between cruisers about five years earlier (see table 2) was only 180 thousand yen. The answer from the naval official clearly attested that the Fourth Squadron incident was so extraordinarily serious as to oblige him to disclose the facts to the public (Kanbō Rinji Chōsa Ka 1984, Vol. 1, Part 2, p. 831). It should be noted here that remedial measures for the turbine problem of all naval vessels, disclosed by the accident in question, were expected to cost 40 million yen (Shibuya n.d.). Nevertheless, no detailed open report of the accident was presented at the Imperial Diet. This fact strongly indicates that the accident was top-secret information not allowed to go beyond the Imperial Japanese Navy.

What, then, were the facts? This question will be answered based on documents owned by Ryūtarō Shibuya, who was the engineering vice admiral of the Navy responsible for the turbine design of the naval vessels at the time (these documents will be called the Shibuya archives hereafter).9

The Hidden Accident and the Outbreak of War with the U.S. and Britain: How Did Japan Deal with "Structural Disaster" in the Past?

According to the materials of the Shibuya archives, a special examination committee was established in January 1938 to investigate the hidden accident (the Minister of the Navy's secretariat, Military). Ignoring duplication of members belonging to different subcommittees and arranging the net membership by section yields the following result (see table 3): all members of the committees were insiders of a single sector, the military sector. According to the voluminous reports of the 66 committee meetings held over a period of ten months, the improvement of 61 naval vessels' turbines was indicated as the remedial measure (Rinkicho Report, Top Secret No. 1, 1938 through Rinkicho Report, Top Secret No. 27, 1938).10

However, the blade breakage in the accident was significantly different from those that had occurred in the past. In impulse turbines, for instance, blades in most cases broke at the base where they were fixed to the turbine rotor. In contrast, one of the salient features of this accident was that the tip of the blade was broken off. The broken-off part amounted to one third of the total length of the blade.
Figure 2 is a photograph showing the locus of the breakage (Rinkicho Report, Top Secret No. 1, 1938, pp. 1-2). The Imperial Japanese Navy had thus had many problems with turbine blades over many years and had accumulated experience in handling them. Accordingly, it is unsurprising that the special examination committee took the failure as a mere routine problem from the outset, based on such long and copious experience.11

[Footnote 11: Calculated based on the Rinkicho Report, Top Secret No. 35, 1938, Appended Sheets.]

Turning our attention to the wartime mobilization of the day, the Japanese government enacted the Wartime Mobilization Law on April 1, 1938 for the purpose of "controlling and organizing human and material resources most efficiently...in case of war" (Clause 1). Naval vessels came first in the specification of the law as "resources for wholesale mobilization" (Clause 2).12 Against this background, a naval engine failure caused by small tip fragments of the main standard engine was a very delicate matter for anyone to raise. And yet, for the reason mentioned above, the cause of this failure seemed to be significantly different from any previous routine problem. The complete test for detecting the cause of this peculiar accident required the Navy to construct from scratch a full-scale experimental apparatus designed for the load test of the standard Kanpon turbine, which was only completed in December 1941, the month the war with the U.S. and Britain broke out. As a result, the schedule for identifying the cause, originally expected to be completed by November 1940, was extended to mid-1943 (Kaigun Kansei Honbu Dai 5 Bu 1943). Thus, it is probable that all of Japan's naval vessels had turbines that were imperfect for some unknown reason when the country went to war with the U.S. and Britain in 1941.

[Footnote 12: Ishikawa (1982, p. 412). The author was in charge of drafting the national mobilization plan at the Cabinet Planning Board (Kikaku In) in the prewar period.]

What was the true cause?13 The true cause was binodal vibration. Previous efforts to avoid turbine vibration had been confined to one-node vibration at full speed, since multiple-node vibration below full speed had been assumed, on the basis of rules of thumb, to be hardly serious and unworthy of attention (Sezawa 1932; Pigott 1937, 1940). The final discovery of the true cause of the hidden accident drastically changed the situation: it revealed that marine turbines were susceptible to a serious vibration problem below full speed. It was in April 1943 that this true cause was eventually identified by the final report of the special examination committee, almost one and a half years after the war broke out (Kaigun Kansei Honbu Dai 5 Bu 1943; see figure 3). Strictly in terms of the technology involved in the accident, and without hindsight, the evidence therefore suggests that the Japanese government went to war in haste in 1941, notwithstanding the fact that it had highly intricate and serious problems with the main engines of all its naval vessels. And that fact was kept secret by the military sector from the other sectors in the military-industrial-university complex, not to speak of the general public. The rarity of breakdowns of naval vessels due to turbine troubles during the war is a completely different matter, a kind of hindsight.
Thus, the hidden accident strongly suggests that practical results alone (for example, the rarity of breakdowns of naval vessels due to turbine troubles) during wartime, and possibly in peacetime as well, do not prove the essential soundness of the development trajectory of a technology, nor that of national decision-making along the trajectory.

The Sociological Implications for the Fukushima Accident: Beyond Success or Failure

The above description and analysis of an independent case, a hidden accident that happened long before the Fukushima accident, provides an important guideline for understanding the Fukushima accident as a "structural disaster" beyond the simplistic dichotomy of success or failure. First, critical information on significant failures in an emergency situation was kept secret from outsiders of the governmental sector in both the prewar accident and the Fukushima accident. Second, both accidents occurred after a long history of successful technological development: the prewar hidden accident, which happened long after the successful operation of the naval turbine in question, the Kanpon type, since the 1920s, reminds us of its structural similarity to the Fukushima accident, which happened after the long successful operation of nuclear reactors closely associated with the myth of safety.14

[Footnote 14: As to the little-known prewar accident, the recognition of binodal turbine blade vibration as the true cause was beyond the knowledge of most turbine designers of the day. This type of problem is supposed to have remained unrecognized until the postwar period, when avoiding turbine blade vibration caused by various resonances still provided one of the most critical topics for research on turbine design (Trumpler Jr. and Owens 1955; Andrews and Duncan 1956; Visser 1960). In fact, a similar failure occurred even in 1969 in the QE2's turbine (Report on QE2 turbines 1969).]

Most importantly, the sociological implications of this prewar hidden accident pertain to the social context of organizational errors. The social context of the prewar accident is the wartime mobilization of science and technology, which was authorized by the Wartime Mobilization Law of 1938 and the Research Mobilization Ordinance of 1939. This formal legal foundation gave rise to the structural integration of the military-industrial-university complex under the control of the military sector. The military sector controlled the overall mobilization, in which the industrial sector and the universities had to obey orders given by the military. This was also associated with an extremely secretive attitude of the military toward outsiders. According to Hidetsugu Yagi, who invented a crucial component technology of radars in the form of the pioneering Yagi antenna and in 1944 became the president of the Board of Technology, the central governmental authority specially set up for the wartime mobilization of science and technology, the military "treated civilian scientists as if they were foreigners" (Report on Scientific Intelligence Survey in Japan 1945).15 Thus, cooperation, not to speak of coordination, with the military sector was very limited even among the central governmental authorities specially set up to integrate every effort for the wartime mobilization of science and technology, and the military-industrial-university complex began to lose its overall integration.
What is important here is the fact that this functional disintegration of the network of relationships linking the military and the other sectors was taking place just as the strong structural integration of the complex was being formally reinforced by the Wartime Mobilization Law of 1938 and the Research Mobilization Ordinance of the following year. This coexistence of structural integration and functional disintegration during wartime mobilization provides a suitable background for redefining success or failure, not only in prewar Japan's context but in the current context of the Fukushima accident.

If the Fukushima accident is a "structural disaster," it could have characteristics similar to this coexistence of structural integration and functional disintegration. For example, functional disintegration of the network of relationships linking the government, TEPCO officials, and the reactor designers of heavy electric equipment manufacturers might have been taking place just as the strong structural integration of the government-industrial-university complex was being formally reinforced by seemingly well-organized ordinances and laws revolving around the "double-check" system, within a single ministry in the past and between two ministries now, METI (Ministry of Economy, Trade and Industry) and the Ministry of the Environment, ministry-bounded in either case. As long as this kind of functional disintegration of the science-technology-society interface continues to exist and to operate behind the façade of structural integration, this state of affairs can lead to similar serious failures in a quite different and larger-scale social context.

The possibility of functional disintegration through structural integration, coupled with secrecy and the suppression of negative information under the name of communication activities, could in the current context be one of the important symptoms of the "structural disaster" embodied by the Fukushima accident. For example, while various communication activities to facilitate links between science, technology, and society had been carried out with public funds, as represented by Café Scientifique, before the Fukushima accident, it turns out that there had been only one Café Scientifique on anything nuclear (held on July 24, 2010) out of 253 carried out in the Tohoku district, including Fukushima prefecture. And even then the topic taken up had nothing to do with any kind of risk from nuclear power plants, not to speak of extreme events.16 This implies that activities supposed to facilitate well-balanced links between science, technology, and society in reality did nothing in advance to communicate the negative aspects of nuclear power plants and therefore played no role in providing early warning against extreme events such as the Fukushima accident. If the "structural disaster" thus embedded in the social context of the Fukushima accident continues to exist in a path-dependent manner, the science-technology-society interface surrounding the accident would probably be unable to tolerate another impact from serious and unexpected events such as a second huge earthquake and tsunami and/or the difficulty of decontamination within some of the reactors in question and their abrupt uncontrollability.17
Therefore, the most important lesson to learn from the Fukushima accident as a "structural disaster," in light of the hidden one that happened much earlier, immediately before the outbreak of WWII, is how to avoid the worst outcome of this kind. That is to say, the seemingly structurally robust but functionally disintegrated science-technology-society interface due to secrecy should be changed. It should be changed by the will of the people who are suffering from the Fukushima accident, and a significant structural remedy should be instituted beyond countermeasures that only temporarily patch over the individual troubles coming to light at a given moment.

[Footnote 16: What is mentioned here was confirmed on November 18 through the following portal website on Café Scientifique in Japan: http://cafesci-portal.seesaa.net/]

[Footnote 17: Although the question of high-level radioactive waste disposal has not been discussed in Japan in association with the Fukushima accident up to now, the disposal question should be added to the list of "serious and unexpected events" (Matsumoto 2010; Macfarlane 2012).]

Conclusion: Prospects for the Future

From the viewpoint of "structural disaster," there are two different kinds of similarity between the prewar hidden accident and the Fukushima accident: one relating to the timing of secrecy, the other to the social context of organizational errors. First, regarding the timing of secrecy in relation to the technological trajectory, both accidents took place after dozens of years of successful operation of domestically produced technologies. This situation made it extremely difficult for the agents involved in the two accidents to make the accidents public, even at a critical moment of decision-making, because disclosure would have drastically destroyed the trust placed in the agents in the public sphere. In that particular sense, secrecy in both accidents could be the result of the need for face-saving on the part of agents who went through "self-reliant failure" for the first time. Second, there is a similarity between the two accidents in terms of the social context of organizational errors. That is to say, the coexistence of structural integration and functional disintegration observed in the prewar accident could similarly reside in the Fukushima accident, together with asymmetrical relationships between the governmental sector and other sectors. In this connection, the nuclear-industrial-university complex in the current context could be a "dysfunctional" equivalent of the military-industrial-university complex in the prewar period.

Of course, there are differences between the two accidents. Among other things, the difference in the way organizational errors came to be detected and corrected is noteworthy. In the prewar accident, the conclusion once reached on the basis of the voluminous reports of the special examination committee, and authorized by the organization in question, was dynamically changed by carefully observed facts about the locus of the sheared tip, regardless of the past experience accumulated in the organization. Such a dynamic reconsideration of alternative possibilities, which must have upset the face-saving procedure within the organization, triggered the restart of the examination, leading to a drastically different conclusion. In contrast, there has been no sign up to now of this kind of dynamic correction of organizational errors working in the Fukushima accident.
Looking at the inside stories of TEPCO, the former NISA, the newly set up NRA (Nuclear Regulation Authority), and other governmental bodies that have been disclosed one after another, one might well suspect the working of mutual "cover-ups" within and/or between the organizations in question, though the possibility of the dynamic correction of organizational errors might still be left open. This difference is noteworthy because, even with the working of such a dynamic correction of organizational errors and reconsideration of alternative possibilities, the realization of the true cause of the prewar accident came too late for Japan to check the soundness of national decision-making before going to war in 1941. In sum, putting together the similarity between the prewar accident and the Fukushima accident as "structural disaster" and the difference as to whether the dynamic correction of organizational errors and the reconsideration of alternative possibilities work, it is crucial for us in the current context to be fully aware of the risk of being too late in two senses. First, we should not be too late in bringing the minimum essentials of the still-ongoing accident into the public sphere by breaking secrecy and the chain of secrecy. Second, we should not be too late in correcting organizational errors because of the face-saving of the organizations in question. These two points are crucial for the Fukushima accident as "structural disaster," because delayed timing could mean the start of something devastating, uncontrollable, and irreversible for all of us.
2018-12-09T10:01:26.670Z
2013-12-01T00:00:00.000
{ "year": 2013, "sha1": "f92cf10b4484a14fddca59d4489ceebffda8ff29", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.21588/dns.2013.42.2.002", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "f92cf10b4484a14fddca59d4489ceebffda8ff29", "s2fieldsofstudy": [ "History", "Political Science" ], "extfieldsofstudy": [] }
220042953
pes2o/s2orc
v3-fos-license
Reversal of glucocorticoid resistance in Acute Lymphoblastic Leukemia cells by miR-145

Objective: To analyze the expression levels of miR-145 in children with ALL and their effects on the prognosis of ALL, and to explore the mechanism by which miR-145 reverses the resistance of ALL cells to glucocorticoids. Methods: A GEO database dataset was used to analyze the expression levels of miR-145 in children with ALL. The association between miR-145 and childhood prognosis was analyzed using TARGET database data. The expression levels of miR-145 in the glucocorticoid-resistant ALL cell line CEM-C1 were increased by Lipofectamine 2000-mediated transfection. Cell proliferation inhibition experiments were performed to detect the effect of miR-145 on the response of the CEM-C1 cell line to glucocorticoids. The expression levels of apoptotic, autophagic and drug resistance-associated genes and proteins were detected by qPCR and western blot analysis. Results: The expression levels of miR-145 were decreased in ALL patients (P < 0.001) and the prognosis of ALL in children with high miR-145 expression was significantly improved (P < 0.001). Increased miR-145 expression improved the sensitivity of CEM-C1 cells to glucocorticoids. The expression levels of the pro-apoptotic gene Bax and the anti-apoptotic gene Bcl-2 were increased and decreased, respectively, whereas the expression levels of the autophagic genes Beclin 1 and LC were increased. In addition, the expression levels of the drug resistance gene MDR1 were decreased. Conclusion: The expression levels of miR-145 were decreased in children with ALL and were associated with disease prognosis. The data indicated that miR-145 can reverse cell resistance by regulating apoptosis and autophagy in CEM-C1 cells.

INTRODUCTION

Acute lymphoblastic leukemia (ALL) is the most common malignant tumor in children and is associated with malignant proliferation of immature T or B lymphocytes (Pui et al., 2015). Glucocorticoids (GCs) are widely used in the treatment of ALL. These drugs induce apoptosis of the lymphoid progenitor cells. However, repeated use of GCs leads to drug resistance of tumor cells, resulting in treatment failure or recurrence. It has been reported that 20% of children with ALL are resistant to GCs and that the proportion of GC resistance in children with recurrent ALL can reach 70% (Xie et al., 2019). Low reactivity to the prednisone-induced test is also one of the main indicators of an increased risk of relapse and treatment failure in childhood ALL (Hematology Section, Chinese Academy of Pediatrics, Editorial Committee, 2014). GCs thus play a critical role in the treatment of ALL. Therefore, identifying the mechanism of GC resistance and developing new treatment strategies can fully unlock the therapeutic potential of GCs and significantly improve the prognosis of ALL.

MicroRNAs (miRNAs) are small non-coding single-stranded RNAs of approximately 22 nucleotides that regulate mRNAs post-transcriptionally. They participate in the regulation of GC sensitivity through a variety of mechanisms, notably the regulation of the intracellular expression of the GC receptors. The sensitivity of tumors to GCs may be affected by miRNAs, which can be used as biomarkers or can provide potential strategies for overcoming drug resistance (Wang et al., 2017).
A previous study demonstrated that miR-145 exhibited low expression levels in adult T-ALL and that it was significantly associated with deterioration of the patient's health condition. Therefore, miR-145 may become a prognostic marker and potential therapeutic target for ALL patients. The present study analyzed the expression levels of miR-145 in childhood ALL and explored the correlation between miR-145 and glucocorticoid resistance in ALL cells. The antitumor effect of miR-145 was examined in the glucocorticoid-resistant ALL cell line CEM-C1, and the mechanism by which it affected sensitivity to glucocorticoids was also explored.

MATERIALS AND METHODS

Dataset

ALL samples from children were collected. The miRNA expression profile dataset GSE56489 was downloaded from the NCBI's GEO database (https://www.ncbi.nlm.nih.gov/geo/). This dataset contained 43 bone marrow samples from children with ALL and 14 age-matched healthy control samples, including 21 males and 22 females, with an average age of 6.8 ± 4.5 years. The data were processed using the GPL14132 platform and the Homo sapiens miRBase 15.0 annotation (Duyu et al., 2014). In addition, the miRNA expression profiles of childhood ALL were downloaded from the TARGET database (Therapeutically Applicable Research To Generate Effective Treatments, https://ocg.cancer.gov/programs/target) managed by the NCI's Office of Cancer Genomics and Cancer Therapy Evaluation Program. The datasets and clinical data were divided into high and low expression groups according to the median gene expression level. The clinical data of the children were combined to further analyze the miRNAs associated with the prognosis of the children; a minimal sketch of this median-split survival comparison is given below.

Cell culture and transfection

The CEM-C1 cell line is a human acute T-lymphocytic leukemia (T-ALL) cell line resistant to dexamethasone (DEX). This cell line was donated by Professor Ma Zhigui (Children's Hematology and Oncology Department of West China Second Hospital of the Sichuan University) (Yan et al., 2014). The cells were cultured in RPMI-1640 medium containing 100 µg/ml penicillin G, 100 µg/ml streptomycin and 10% fetal bovine serum in an incubator at 37°C in the presence of 5% CO2. When the cells reached 85% confluence or more, they were prepared for subculture. Following washing, CEM-C1 cells were grown to the log phase and seeded in a 6-well plate in serum-free and antibiotic-free culture medium. The miR-145 mimic and miR-145 mimic NC control were transfected with Lipofectamine 2000 transfection reagent according to the manufacturer's instructions. The miR-145 inhibitor and the miR NC control sequences were transfected into the CEM-C1 cells. The cells were divided into the following groups: miR-145 mimic (MM), miR-145 mimic NC (MMN), miR-145 inhibitor (MI) and miR-145 inhibitor NC (MIN). The four groups were used for subsequent experiments at 48 h following transfection. A fluorescent miR-145 mimic was transfected to observe cellular morphology by fluorescence microscopy and to rapidly assess transfection efficiency.

qRT-PCR detection of miR-145 and associated-gene expression in each group of cells

The aforementioned groups of cells were collected, and total RNA was extracted with a total RNA extraction kit and reverse-transcribed into cDNA. The target gene miR-145 and the internal reference genes GAPDH and RNU6B were amplified by fluorescent quantitative PCR. The sequences of the primers of each gene are shown in Table 1.

Table 1: Primer sequences for related mRNAs.
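For illustration, the median-split and log-rank comparison described in the Dataset subsection can be scripted along the following lines. This is a minimal sketch, not the authors' actual pipeline: the lifelines-based call and the column names ('mir145_expr', 'time', 'event') are assumptions.

```python
# Minimal sketch of a median-split survival comparison (hypothetical column names).
import pandas as pd
from lifelines.statistics import logrank_test

def compare_by_median_expression(df: pd.DataFrame):
    """Split patients at the median miR-145 expression and run a log-rank test."""
    cutoff = df["mir145_expr"].median()       # median split, as described above
    high = df[df["mir145_expr"] > cutoff]     # high-expression group
    low = df[df["mir145_expr"] <= cutoff]     # low-expression group
    result = logrank_test(
        high["time"], low["time"],
        event_observed_A=high["event"],       # 1 = event (death/relapse), 0 = censored
        event_observed_B=low["event"],
    )
    return cutoff, result.p_value
```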
The cDNA samples were pre-denatured at 95°C for 60 s, denatured at 95°C for 15 s, annealed at 60°C for 15 s and extended at 72°C for 45 s; a total of 40 cycles were run for the fluorescent quantitative PCR reaction. The data were analyzed by the 2^-ΔΔCt formula, indicating the relative expression levels of the target gene mRNA.

CCK8 detection of the effect of miR-145 on CEM-C1 cell responsiveness to dexamethasone

Following transfection, CEM-C1 cells were incubated for 48 h and the cell density was adjusted to 1 × 10^5 cells per well. The cells were seeded in 96-well plates, and 20, 40, 80, 160 and 320 mg/ml dexamethasone (DEX) were added to the respective groups. A total of 3 replicates were set up for each group of cells and the experiment was repeated 3 times. Following 48 h of incubation, 10 µl of CCK8 reagent was added to each well. The samples were incubated in a CO2 incubator for 4 h and the OD value of each well was measured by the absorbance (A) reading at 450 nm using a multifunctional microplate reader. Values were calculated according to the following formulas: cell proliferation inhibition rate = (OD_control - OD_experimental)/(OD_control - OD_blank) × 100%; IC50 = lg^-1[Xm - I(∑P - 0.5)]; resistance index (RI) = IC50_control/IC50_experimental.

Flow cytometry detection of the effect of miR-145 on CEM-C1 cell apoptosis

CEM-C1 cells were collected 48 h following transfection, washed with pre-cooled PBS, and mixed with 500 µl of binding buffer. A total of 100 µl of cell suspension was used for each assay. Each sample was mixed with 5 µl Annexin V-FITC and 5 µl PI staining solution and subsequently incubated for 15 min at room temperature in the dark. A total of 400 µl of binding buffer was added and 1 × 10^4 cells were analyzed by flow cytometry. Annexin V-FITC-positive cells were classified as apoptotic cells, while FITC-negative cells were classified as viable cells.

Acridine orange staining for the effects of miR-145 on the induction of autophagy in CEM-C1 cells

CEM-C1 cells were collected 48 h following transfection, washed twice with PBS and stained with acridine orange at a final concentration of 100 µg/ml. The cells were stained for 30 min in the dark and observed under a fluorescence microscope.

Western blot analysis of apoptotic and autophagic proteins in CEM-C1 cells

CEM-C1 cells were collected at 48 h following transfection and total cell proteins were extracted according to the instructions of the whole-protein extraction kit. The protein concentration was measured by the BCA method and the same amount of protein was used for each sample. The protein samples were boiled for 5 min in 5× SDS loading buffer. Following SDS-PAGE electrophoresis, the proteins were transferred to PVDF membranes and blocked with 5% skimmed milk powder for 2 h. The samples were rinsed thoroughly with TBST (10 min, 3 times). Bax, Bcl-2, LC3 I/II, Beclin-1 and MDR1 polyclonal antibodies were diluted at a 1:1,000 ratio and the internal reference GAPDH antibody was diluted at a 1:10,000 ratio. The samples were incubated overnight at 4°C, washed thoroughly with TBST, and goat anti-rabbit secondary antibody was added at a 1:2,000 dilution. The samples were further incubated at room temperature for 1 h and washed thoroughly with TBST (10 min, 3 times). Finally, ECL luminescence reagents were added for protein detection.
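To make the formulas above concrete, the following is a minimal sketch of the 2^-ΔΔCt calculation and the CCK8-derived quantities. Reading the IC50 formula above as the Kärber method is an assumption, as are the function names; this is an illustration, not the authors' code.

```python
import math

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCt: dCt = Ct(target) - Ct(reference); ddCt = dCt(sample) - dCt(control)."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

def inhibition_rate(od_control, od_experimental, od_blank):
    """Cell proliferation inhibition rate (%) from CCK8 optical densities."""
    return (od_control - od_experimental) / (od_control - od_blank) * 100.0

def ic50_karber(max_dose, dilution_factor, inhibition_fractions):
    """IC50 = lg^-1[Xm - I*(sum(P) - 0.5)], with Xm = lg(max dose),
    I = lg(dilution factor between doses), P = inhibition fraction per dose."""
    xm = math.log10(max_dose)
    i = math.log10(dilution_factor)
    return 10.0 ** (xm - i * (sum(inhibition_fractions) - 0.5))

def resistance_index(ic50_control, ic50_experimental):
    """RI = IC50(control) / IC50(experimental)."""
    return ic50_control / ic50_experimental

# Example with the dose series used above (two-fold dilutions up to 320)
# and hypothetical inhibition fractions at each dose:
# ic50 = ic50_karber(320, 2, [0.15, 0.30, 0.55, 0.75, 0.90])
```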
The images were analyzed with the Gel-Pro Analyzer software and the relative expression levels of the target proteins were expressed as the ratio of the intensity of each protein band to that of the GAPDH band.

Statistical analysis

Statistical analysis of the data was performed using GraphPad Prism 7.0 software. The experimental data were expressed as mean ± standard deviation (mean ± SD). Data from multiple groups were compared by one-way analysis of variance, and data from two groups were compared by the t-test. Differences were considered statistically significant at P < 0.05.

RESULTS

Association between miR-145 and childhood ALL

Expression levels of miR-145 in ALL samples from children

The expression levels of miR-145 were determined in bone marrow samples from 43 children with ALL and from 14 healthy children. The expression levels of miR-145 in children with ALL and in age-matched healthy control samples were 7.09 ± 0.19 and 8.67 ± 0.27, respectively. The expression levels of miR-145 in children with ALL were significantly lower than those of the healthy subjects (t = 4.26, P < 0.0001) (Fig. 1).

Association between miR-145 levels and prognosis of children with ALL

The expression of miR-145 in bone marrow samples of 179 children with ALL was determined using the TARGET database and log-rank survival analysis. Combined with the clinical data, these results showed that the prognosis of children with ALL in the high miR-145 expression group was significantly better than that of the low expression group (P < 0.001) (Fig. 2).

Expression levels of miR-145 in CEM-C1 cells

Transfection of miR-145 as determined by fluorescence microscopy

Following transfection of CEM-C1 cells with fluorescent miR-145 mimics, the cells were observed using fluorescence microscopy. The transfected cells appeared red under the fluorescence microscope, as shown in Fig. 3A. The cell morphology was also observed using light microscopy (Fig. 3B). The efficiency of transfection was estimated at approximately 70%, and a small number of cells were not viable due to the toxicity of the transfection reagent.

qPCR detection of transfection efficiency in each group of cells

The same method was used to transfect the miR-145 mimic, miR-145 mimic NC negative control, miR-145 inhibitor and miR-145 inhibitor NC negative control sequences into CEM-C1 cells for 48 h. Subsequently, miR-145 expression was detected in the four groups of cells by qPCR in order to confirm successful transfection. miR-145 exhibited significantly higher expression in the MM group (P < 0.01), whereas its expression levels were significantly decreased in the MI group (P < 0.001) (Fig. 4).

miR-145 improves the sensitivity of CEM-C1 cells to dexamethasone

The results indicated that DEX exhibited a significant inhibitory effect on the cells of each group (Fig. 5A). The IC50 values of the MIN, MI, MMN and MM groups were 69.94 ± 9.33, 142.60 ± 6.74, 75.24 ± 7.86 and 42.66 ± 5.26 mg/ml, respectively, and the differences between the groups were statistically significant by t-test analysis (Fig. 5B). The resistance index (RI), estimated from the IC50 values of the MI and MM groups, was 3.34.
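As a quick numerical check, the reported resistance index follows directly from the group IC50 means, and a t-test from the summary statistics can be sketched as below. The n = 3 replicates per group is an assumption based on the triplicates described in the methods; this is an illustration, not the authors' analysis.

```python
from scipy.stats import ttest_ind_from_stats

# Reported IC50 values (mean, SD) for the MI and MM groups
mi_mean, mi_sd = 142.60, 6.74
mm_mean, mm_sd = 42.66, 5.26
n = 3  # assumed replicates per group (triplicates per the methods)

# Resistance index: ratio of the MI and MM group IC50 means
ri = mi_mean / mm_mean
print(f"RI = {ri:.2f}")  # -> RI = 3.34, matching the reported value

# Welch's t-test computed directly from summary statistics
t, p = ttest_ind_from_stats(mi_mean, mi_sd, n, mm_mean, mm_sd, n, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```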
Effects of miR-145 on the induction of apoptosis in CEM-C1 cells

Flow cytometry detection of the apoptotic effects of miR-145 in each group

Following 48 h of transfection, the apoptotic rate of the MIN group (7.80 ± 0.70%) was higher than that of the MI group (6.65 ± 0.40%). The apoptotic rate of the cells in the MMN group was 7.73 ± 1.06%, whereas the apoptotic rate of the cells of the MM group was 15.76 ± 0.17%. The apoptotic rate of the drug-resistant cells with increased expression of miR-145 was thus significantly increased (t = 7.45, P < 0.01) (Fig. 6).

Effects of miR-145 on the expression levels of the apoptotic genes Bax, Bcl-2 and MDR1 and of their corresponding proteins

Following qPCR analysis, the expression levels of the pro-apoptotic gene Bax in the MM group were higher than those of the MI group (P < 0.01). The expression levels of the anti-apoptotic gene Bcl-2 were decreased in the MM group (P < 0.01). The expression levels of the drug resistance gene MDR1 were also decreased in the MM group (P < 0.01) (Fig. 7A). The results of the western blot analysis were consistent with the qPCR results, with the expression levels of the Bax protein higher in the MM group.

Effects of miR-145 on the induction of autophagy in CEM-C1 cells

Acridine orange detection of miR-145-mediated autophagy in each group of cells

Following transfection, the cells were cultured for 48 h and the effects of miR-145 on the induction of autophagy in the drug-resistant cells were detected by acridine orange staining. The data indicated a higher number of red-spotted acidic vesicles (autophagosomes) in the cytoplasm of the MM group, indicating that increased expression levels of miR-145 could promote autophagy in drug-resistant cells (Fig. 8).

Effects of miR-145 on the expression of the drug resistance gene MDR1, of the autophagic genes LC and Beclin 1 and of their corresponding proteins in each group

When autophagy is induced, cytosolic LC-I is enzymatically converted to LC-II, and this process can be evaluated based on the LC-I/II ratio. qPCR analysis demonstrated that the expression levels of the autophagy-associated genes LC and Beclin 1 in the MM group were higher than those in the MI group, while the LC-I/II ratio in the MM group was reduced; all of these differences were statistically significant (Fig. 9A). The western blot data were also consistent with the qPCR results. In the MM group, the LC-I/II ratio was decreased (P < 0.05) and the levels of the autophagic protein Beclin 1 were increased (P < 0.01). In addition, cyclin B1 expression was also significantly increased (P < 0.001) (Figs. 9B and 9C). The autophagy inhibitor 3-methyladenine (3-MA) was added to the MM and MMN groups in order to confirm whether autophagy-related genes interact with miR-145 to reverse the resistance of CEM-C1 cells to glucocorticoids. The results indicated that, following addition of the inhibitor, the autophagic protein LC-I/II ratio increased significantly, whereas the expression levels of Beclin 1 decreased and the expression levels of the drug resistance protein MDR1 increased (Fig. 10).

DISCUSSION

Glucocorticoids (GCs) can induce apoptosis of ALL cells and are the main chemotherapeutic drugs used in children with ALL. The resistance of patients with ALL to GCs is also a major challenge in clinical treatment (Pui et al., 2012).
MicroRNAs (miRNAs) are a group of small non-coding RNAs that are approximately 22 nucleotides in length. Accumulating evidence suggests that miRNAs play an important role in several biological processes and that their main mode of action is to bind to the 3′-UTR region of the target sequence, resulting in inhibition or degradation of mRNA molecules (Iorio & Croce, 2017). miR-145 is located on chromosome 5 (q32-33), a typical fragile site of the human genome. Previous studies have demonstrated that in a variety of tumors, such as glioma and ovarian, lung and esophageal cancers, the expression levels of miR-145 were lower than those of the adjacent or normal tissues, indicating that miR-145 plays a role in tumor suppression (Hua et al., 2019; Li et al., 2019; Zheng et al., 2019). Moreover, miR-145 may be involved in ALL resistance to GCs (Bhadri, Trahair & Lock, 2012). However, no specific mechanism of action has been discovered to date.

In the present study, we found that the expression levels of miR-145 in childhood acute lymphoblastic leukemia were lower than those noted in healthy controls, suggesting that miR-145 plays a regulatory role in the development of childhood ALL (Fig. 1). In addition, it was found that the survival time of children with high expression of miR-145 was significantly longer than that of the low expression group, further indicating that miR-145 is associated with the prognosis of children with ALL (Fig. 2). The adaptation of the chemotherapeutic dose and schedule of GC administration according to tumor resistance has become a new focus for the diagnosis and treatment of children with ALL. Therefore, it may become possible to achieve a precise treatment plan for children with ALL.

At the cellular level, the present study increased the expression levels of miR-145 in the GC-resistant ALL cell line CEM-C1 by transfecting miR-145 mimic sequences into the cells, with miR-145 inhibitor sequences as a comparison. In the cells with increased expression of miR-145, the proportion of cells undergoing apoptosis and autophagy was significantly higher than that of the low expression group (Figs. 5 and 6), which confirmed the aforementioned hypothesis.

Apoptosis is a highly ordered and active death process regulated by specific genes, which is considered one of the main ways of maintaining body homeostasis. The Bcl-2 protein family plays a "main switch" role in the process of apoptosis by regulating the opening and closing of mitochondrial PT pores (Agrawal et al., 2011). Bcl-2 is an anti-apoptotic gene initially identified in follicular lymphoma, which inhibits apoptosis via a calcium-dependent specific protein phosphorylation mechanism (Preisler & Gopal, 1994). This function is considered one of the main mechanisms responsible for drug resistance (Preisler & Gopal, 1994). Bax is the most widely studied pro-apoptotic protein in the Bcl-2 family. It is localized in the cytoplasm and promotes the opening of PT pores, releasing cytochrome C, activating caspase-9 and activating the mitochondrial apoptotic pathways (Preisler & Gopal, 1994). In addition, it antagonizes the Bcl-2 anti-apoptotic effect and accelerates cell death (Perkins et al., 2000). Autophagy is a physiological process in which cells self-degrade. This process has an important function in maintaining internal cellular stability.
Following induction of autophagy, cells are protected from hazards such as a harsh environment; autophagy may also lead to type II programmed death, or autophagic death, which is distinct from apoptosis (Jia et al., 2009; Cheallaigh et al., 2011; Park & Cuervo, 2013). Among the associated genes, Beclin 1 is involved in the initiation of autophagy. This protein is located in the autophagosome membrane. Its expression is positively associated with the levels of autophagy and plays an important role in its induction. An increase in the expression levels of these markers is an important and reliable indicator of autophagy (Luo & Rubinsztein, 2010; Russell et al., 2013). LC3 is a homolog of the autophagy-related gene 8 (ATG8) in mammalian cells. It is targeted to the autophagosome membrane and participates in the induction of autophagy. LC-I is localized in autophagosomes and is cleaved into LC-II by lysosomes. Therefore, a decrease in the LC-I/II ratio is the hallmark of autophagy (Mizushima, Yoshimori & Ohsumi, 2003).

In the present study, qPCR and western blot analyses indicated that miR-145 could promote apoptosis by upregulating the levels of the pro-apoptotic protein Bax and downregulating the levels of the anti-apoptotic protein Bcl-2. Moreover, miR-145 promoted cell self-regulation by controlling the levels of the autophagic proteins LC and Beclin 1. The expression levels of the resistance protein MDR1 were also decreased following the increase of miR-145, indicating that miR-145 could reverse the resistance of the CEM-C1 cell line to GCs. The mechanism of resistance of the ALL cell line CEM-C1 to GC treatment has not been fully elucidated, and further experiments are required to explore the effects of miR-145 on other signaling pathways and key signaling proteins. Additional in-depth mechanistic analysis is required through animal studies and clinical trials.

CONCLUSIONS

In summary, the present study indicated that miR-145 expression was downregulated in children with ALL and was associated with disease prognosis. miR-145 can increase the sensitivity of CEM-C1 cells to GCs by promoting the induction of autophagy and apoptosis. These data may provide novel avenues for the clinical diagnosis and treatment of children with ALL.
2020-06-18T09:08:19.907Z
2020-06-16T00:00:00.000
{ "year": 2020, "sha1": "0e03468cf78b1612c716c055fba5f95426dd951e", "oa_license": "CCBY", "oa_url": "https://peerj.com/articles/9337.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a932fa139c7287aecd3f856e98dac3d06c082bcb", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
207915477
pes2o/s2orc
v3-fos-license
Pheochromocytoma Crisis

Pheochromocytomas are rare tumours of the adrenal gland that secrete catecholamines. The classical presentation of these tumours consists of a clinical triad of headaches, palpitations and diaphoresis. This clinical presentation should not be confused with the potentially fatal presentation of pheochromocytoma crisis, which may include severe haemodynamic instability and collapse, multi-organ failure, hyperthermia and encephalopathy. When patients present in profound shock, supportive care and treatment are initiated. Patients presenting with pheochromocytoma crisis have an underlying adrenal tumour, but the clinical manifestations of this life-threatening condition can mimic other entities. Once the diagnosis is made, previous anecdotal evidence has suggested that pheochromocytoma crisis is a surgical emergency. However, retrospective study of a larger sample of patients presenting with pheochromocytoma crisis suggests that medical management in the acute setting is appropriate and safe. The ultimate treatment is indeed surgical; however, there is no clear recommendation for the acute management of pheochromocytoma crisis. This chapter will focus on the medical and surgical management of potentially life-threatening pheochromocytoma crisis. An in-depth review of the clinical presentation, pathophysiology, causes and treatments of pheochromocytoma crisis will be provided, including the controversial areas surrounding decision-making and timing for adrenalectomy.

Introduction

Pheochromocytomas are rare tumours of the sympathetic nervous system that arise from the chromaffin cells of the adrenal medulla. These tumours secrete catecholamines either intermittently or continuously. Pheochromocytomas are unilateral in 90% of cases, whereas bilateral disease is found more commonly in the paediatric population and is associated with genetic syndromes. Right-sided adrenal tumours are more common and have a higher preponderance to cause paroxysmal hypertension compared to left-sided tumours, which are generally associated with persistent hypertension. These tumours have an estimated incidence of 2-8 cases per million per year [1] and comprise less than 0.1% of the hypertensive population; however, approximately 90% of all patients with pheochromocytoma have associated hypertension [2]. The classic presentation of pheochromocytoma consists of a triad of symptoms, including headaches, diaphoresis and palpitations. The gold standard of treatment for pheochromocytoma is elective surgical resection after an appropriate, usually 1-2 week, course of anti-hypertensive therapy. Pheochromocytoma multisystem crisis (PMC) was a term first described in 1988 [3].
This rare and potentially fatal entity consists of a tetrad of symptoms including haemodynamic instability and collapse, encephalopathy, hyperthermia and multi-organ failure. PMC is not synonymous with hypertensive crisis; patients with PMC typically have very labile blood pressures ranging from severe hypotension to severe hypertension (e.g. 60-250 mm Hg systolic). The treatment of PMC remains controversial, as there is no consensus among clinicians regarding the appropriate timing of adrenalectomy in the specific setting of pheochromocytoma crisis. This chapter will address the clinical presentation of PMC, the pathophysiology of pheochromocytoma, the causes of PMC and a description of medical versus surgical treatment. Finally, the evidence regarding emergency adrenalectomy to treat PMC compared with medical management will be discussed.

Clinical presentation

Pheochromocytoma has been termed the 'great mimicker' because it presents in a non-specific way that may be mistaken for other clinical entities. Patients presenting with the classic triad of pheochromocytoma (i.e. headaches, palpitations and diaphoresis) may initially be given a misdiagnosis of migraine headaches or of psychiatric conditions such as acute anxiety or panic attacks. This clinical situation can be particularly dangerous because some medications used for treatment (i.e. β-blockers) may induce paroxysms of severe hypertension and subsequent pheochromocytoma crisis. Some clinicians advocate that patients with anxiety and/or migraines should undergo formal screening for pheochromocytoma because the treatment of the former conditions may precipitate a crisis [4].

PMC consists of a constellation of symptoms that can also resemble other life-threatening conditions and can be difficult to diagnose if the patient is not already known to have pheochromocytoma. PMC, which consists of haemodynamic instability with either severe hypotension or hypertension, labile hypertension, hyperthermia (≥40°C), encephalopathy and multi-organ failure, can be confused with other diagnoses such as septic shock, thyroid storm and malignant hyperthermia. This complex can be deleterious to every organ system, resulting from extreme vasoconstriction caused by excess norepinephrine secretion, but also from vasodilatation caused by excess epinephrine secretion and volume contraction, with a subsequent low-flow state. Encephalopathy may occur secondary to severe hypertension or to direct effects of catecholamines on the brain. Other neurologic manifestations of PMC include cerebrovascular accidents and seizures. Cardiac complications are numerous and may include cardiomyopathy, myocarditis, myocardial ischemia and necrosis secondary to coronary vasospasm, congestive heart failure, cardiac arrhythmias and cardiogenic shock. Pulmonary manifestations include pulmonary edema and acute respiratory distress syndrome. Patients with PMC may also present with acute liver failure, acute kidney injury, disseminated intravascular coagulation, lactic acidosis, diabetic ketoacidosis and rhabdomyolysis. Gastrointestinal manifestations include paralytic ileus and intestinal ischemia secondary to vasoconstriction. Vascular complications can include peripheral thrombosis, embolism and vasospasm [5,6].

Pathophysiology

Pheochromocytomas arise from the chromaffin cells of the adrenal medulla. Chromaffin cells produce catecholamines, and pheochromocytomas can produce up to 27 times the synthetic capacity of the normal adrenal medulla.
This high rate of production causes accumulation of catecholamines and their metabolites, the metanephrines, in the cytoplasm of the chromaffin cells, from which they diffuse into the vascular system [2]. Tumour size directly correlates with the level of catecholamine secretion, with smaller tumours secreting fewer hormones than larger tumours, whereas larger tumours are reported to have wider variability of hormone secretion [7]. Most pheochromocytomas produce epinephrine and norepinephrine, which both act on G-protein-coupled adrenergic receptors [8]. Norepinephrine acts on α-1-adrenergic receptors, located on smooth muscle cells within peripheral arteries and veins, causing vasoconstriction; on α-2-adrenergic receptors, located on the presynaptic surface of sympathetic ganglia, causing coronary vasoconstriction and peripheral arterial dilatation; and on β-1-adrenergic receptors, located on cardiomyocytes, causing positive inotropic effects, as depicted in Figure 1. Activation of β-1-adrenergic receptors also causes increased secretion of renin, which increases the mean arterial pressure. Epinephrine primarily acts on β-1- and β-2-adrenergic receptors. Activation of β-2-adrenergic receptors leads to vasodilatation of arteries as well as increased secretion of norepinephrine by the sympathetic ganglia.

Depending on the catecholamine secretory profile of the tumour, pheochromocytomas can have different clinical manifestations. Most pheochromocytomas secrete more norepinephrine than epinephrine; however, they can secrete both hormones or secrete epinephrine alone. Severe hypertension may develop because of vasoconstriction from excess norepinephrine secretion, whereas severe hypotension may result from widespread vasodilatation caused by excess epinephrine secretion. Other mechanisms have been postulated to explain these changes in blood pressure. One explanation is that tumour necrosis may cause overwhelming tumour cell death and an abrupt cessation of catecholamine secretion, thereby leading to severe hypotension. However, it has also been postulated that tumour cell death may lead to cell lysis and a subsequent massive release of catecholamines and severe hypertension. It is unclear which pathophysiologic mechanisms are responsible for the haemodynamic instability associated with PMC, but each mechanism likely contributes to the overall clinical picture.

Causes

Pheochromocytomas can cause sustained hypertension if there is continuous secretion of catecholamines, but can also cause paroxysmal hypertension with associated symptoms. If the paroxysm is severe, it may precipitate PMC, as reviewed in Table 1. PMC can occur spontaneously, if there is necrosis or haemorrhage of the tumour itself or if there is any source of external pressure on the tumour. Changes in body position, even something as benign as rolling over in bed, may induce PMC [2]. Vigorous exercise, especially if it involves bending and lifting, may also precipitate PMC, as may any kind of trauma. PMC may also occur in the perioperative period, in the setting of adrenalectomy or any other operative indication. PMC can be triggered by certain anaesthetic agents upon induction of general anaesthesia, by intubation, bladder catheterization, surgical skin incision, establishment of pneumoperitoneum and surgical manipulation of the tumour itself [9]. Anxiety and stress may also trigger an episode of PMC.
Certain foods, such as aged cheeses, beer, wine, meats, fish, bananas and chocolate, especially those containing tyramine, have been reported to induce PMC [2]. Finally, many medications have been associated with PMC, including β-blockers, glucocorticoids, metoclopramide, various anaesthetic agents, tricyclic anti-depressants, MAO inhibitors, opiates, methyldopa, nicotine, cocaine and certain radiocontrast media. The use of non-selective β-blockers causes unopposed activation of α-adrenergic receptors, thus exacerbating vasoconstriction and worsening hypertension. Glucocorticoid administration may cause PMC by stimulating catecholamine release from the tumour itself and also by potentiating the effects of catecholamines at the level of the endothelial and smooth muscle cells in the peripheral vasculature [10]. Metoclopramide may cause PMC by stimulating catecholamine release through action on serotonin type 4 receptors [11]. Any anaesthetic agent that induces catecholamine surges or histamine release may precipitate PMC; such agents include ketamine, which has sympathomimetic effects; succinylcholine, which can cause catecholamine surges and stimulation of autonomic ganglia, as well as possibly causing mechanical stimulation via muscle fasciculations in close proximity to the tumour; pancuronium; atropine; and inhalational anaesthetics such as halothane, which is arrhythmogenic, and desflurane, which is a sympathomimetic drug [9].

Special considerations should be made for pheochromocytoma in the context of pregnancy, as there may be adverse effects to both mother and foetus. PMC can be triggered by increased intra-abdominal pressure during gestation and normal labour and delivery, by normal foetal movements or by tumour compression during labour. PMC almost inevitably occurs with vaginal delivery, and for this reason, pregnant patients with pheochromocytoma in the antepartum period should be delivered by Caesarean section. Depending on when the diagnosis of pheochromocytoma is made, the patient should undergo laparoscopic resection in the first or second trimester, or at the time of Caesarean section after delivery. Unrecognized pheochromocytomas have been associated with very high incidences of morbidity and mortality, with reported values of 40% for maternal mortality and 56% for foetal mortality [2]. While maternal catecholamines do not cross the placenta, they can cause uteroplacental insufficiency and subsequent foetal demise [2].

Treatment options

Medical management of pheochromocytoma is necessary prior to surgical resection. In PMC, every attempt should be made to control labile blood pressure to reduce or stop the progression of symptoms and thereby stabilize the patient. Many different classes of anti-hypertensive agents can be used to treat hypertension in pheochromocytoma preoperatively, before elective adrenalectomy. Intravenous agents such as phentolamine, a parenteral, short-acting α-adrenergic blocker; nitroprusside; nitroglycerin; nicardipine, a calcium-channel blocker; atenolol or esmolol, β-adrenergic blockers; and magnesium sulphate have all been shown to effectively treat hypertensive crisis. Intravenous lidocaine is also used to treat the cardiac arrhythmias seen in PMC. The first-line agents are α-adrenergic blockers, the most common of which is phenoxybenzamine, a non-selective blocker with a long half-life.
Phenoxybenzamine decreases blood pressure, but may also increase the risk of tachycardia while decreasing the risk of cardiac arrhythmias; this mechanism of action is achieved by blocking α-adrenergic receptors and not by decreasing the synthesis of catecholamines. Selective α-blockers are also used, including doxazosin and prazosin, which are as effective at treating haemodynamic instability as the non-selective α-blockers. These agents are associated with less reflex tachycardia and less post-operative hypotension than non-selective α-blockers. Calcium channel blockers (e.g. nifedipine, verapamil or diltiazem) are better tolerated by patients than α-blockers; however, they are less effective and therefore not usually a first-line choice. Angiotensin-converting enzyme inhibitors and angiotensin receptor blockers have also been used to control hypertension in pheochromocytoma, but not as first-line agents; they are usually used in combination with other classes of medications to control blood pressure more effectively. β-adrenergic blockers are typically administered only after α-blockers have been started, and they are used specifically to treat persistent tachycardia. Non-selective β-blockers should not be used in the treatment of pheochromocytoma because of their effects on β-2-adrenergic receptors, which inhibit vasodilatation and worsen hypertension. Instead, selective β-blockers should be prescribed at low doses, as they act solely on β-1-adrenergic receptors, thereby decreasing heart rate. Alpha-methyl-para-tyrosine can also be used to treat hypertension in pheochromocytoma, as it interrupts the first step in the biosynthesis of catecholamines by inhibiting the enzyme tyrosine hydroxylase. However, this drug has severe adverse effects that include psychiatric disturbances, extrapyramidal symptoms, sedation and urolithiasis, and its use is therefore generally reserved for patients with malignant or metastatic pheochromocytoma.

Surgical resection remains the definitive treatment for pheochromocytoma. Laparoscopic transperitoneal adrenalectomy is most commonly performed; however, other approaches may be used, such as the lateral retroperitoneal, posterior retroperitoneal and transthoracic surgical approaches. Successful adrenalectomy requires close communication between the surgical team and the anaesthesiology team, especially at the time of adrenal vein dissection and division, as the patient may develop profound hypotension once the vein is ligated. Tumour and adrenal gland manipulation should be minimized until after the vein is clipped. The timing of surgical resection in the context of PMC is very controversial, and there is no clear consensus among clinicians as to whether emergency adrenalectomy is indicated and/or considered safe for PMC. The following section will review the literature pertaining to the treatment options, decision-making and outcomes in PMC.

Initial management: when is it appropriate to operate?

The treatment of PMC has traditionally consisted of immediate medical stabilization followed by emergent or urgent adrenalectomy. There are three treatment options in the case of PMC: (1) emergent adrenalectomy, i.e. once the diagnosis of PMC is made, the patient proceeds directly to surgery; (2) urgent adrenalectomy, i.e. the patient's haemodynamic status is first treated medically with a short course of α-blockade prior to adrenalectomy, usually within 7-10 days of presentation and within the same hospital admission; and (3) elective adrenalectomy, i.e.
planned surgery following initial medical stabilization and discharge from hospital. The tendency for emergency adrenalectomy was based on anecdotal evidence from published case reports suggesting that medical management alone led to poorer outcomes. In 1980, one group recommended that only brief attempts should be made to stabilize the patient's haemodynamics and that 'procrastination' prior to operative intervention would lead to 'irreversible shock, renal failure and death' [12]. In their series, two patients who presented with 'acute pheochromocytoma' both died: one in the post-operative period and one in whom the diagnosis of pheochromocytoma had not been established. Another group published a case series that included three cases of PMC [13]. One patient's hypertensive crisis was successfully controlled prior to operative intervention; however, the patient then developed a fever of 40°C, prompting a septic workup, became encephalopathic with respiratory distress, and suffered fatal cardiac arrhythmias while awaiting surgery. The second patient in their series presented with syncope and quickly developed multisystem organ failure despite adequate blood pressure control with multiple α- and β-blocking agents. Urgent adrenalectomy was eventually performed 4 days after hospital admission. The operation was successful, but the patient's post-operative course was prolonged and she was left with long-term sequelae of her encephalopathy, including quadriplegia and dysarthria. The third patient in the case series also presented with hypertensive crisis and rapid deterioration to multisystem organ failure, and underwent emergency adrenalectomy for refractory multi-organ failure on the seventh day following admission. Surgery was successful, and the patient's multisystem crisis resolved post-operatively.

More recently, several case reports have been published that also support urgent operative intervention in the setting of PMC. In 2008, a study described a case of PMC upon induction of general anaesthesia for elective adrenalectomy for a known pheochromocytoma, despite preoperative α-blockade [14]. Surgical resection was aborted, and the patient was transferred to the authors' institution, where he remained in the intensive care unit for 6 days for aggressive medical stabilization followed by urgent adrenalectomy. The patient eventually recovered from his multisystem organ failure and was discharged from the hospital 1 month later. In 2010, a case of PMC was reported that initially presented with acute respiratory failure and encephalopathy, in which surgical resection was performed 11 days after admission because of the patient's progressive and uncontrollable medical deterioration [5]. Post-operatively, the patient's condition improved almost immediately, and the multisystem organ failure resolved except for chronic renal failure requiring long-term haemodialysis. In another case report, a patient who presented with acute heart failure and cardiogenic shock refractory to inotropic pharmacotherapy required insertion of an intra-aortic balloon pump, and extracorporeal membrane oxygenation (ECMO) was considered. The treating physicians, however, elected to proceed with emergent adrenalectomy. The patient's haemodynamic and respiratory status greatly improved shortly after surgery [15]. Several other authors have challenged the notion that the only viable option for the treatment of PMC is emergency adrenalectomy.
Elective adrenalectomy following intensive medical stabilization has been described in several case reports. A 12-year-old child with severe dilated cardiomyopathy secondary to excess catecholamine secretion from a pheochromocytoma was treated with anti-hypertensives, specifically phenoxybenzamine and α-methyl-para-tyrosine, for 7 months prior to surgical resection [16]. Cardiac function improved moderately with medical management alone in the preoperative period and normalized completely post-operatively. Another report described two cases of PMC resulting in respiratory failure requiring intubation and ventilation, and acute kidney injury requiring continuous venovenous haemodialysis [4]. Adrenalectomy was performed at least 1 month after initial presentation, following medical stabilization and maintenance.

The case reports above describe the clinical presentation of PMC in detail, but there is very limited data regarding the perioperative management of PMC and whether it is preferable to proceed with emergency surgery with preoperative α-blockade versus medical management alone in the immediate period of crisis, followed by hospital discharge and elective adrenalectomy. The current literature reports only anecdotal evidence and is consequently subject to publication bias. However, one group recently performed a retrospective chart review of PMC cases at their institution as well as a literature review of all cases of PMC that underwent adrenalectomy [6]. The authors reviewed medical charts from March 1993 to October 2011 of all patients who underwent adrenalectomy for a diagnosis of pheochromocytoma or paraganglioma confirmed by pathology. They defined pheochromocytoma crisis as severe hypertension or hypotension resulting in end-organ damage, and found that 25 of 137 patients presented with crisis. None of the patients in their series underwent emergency surgery without initial α-blockade. All but one patient were stabilized with phenoxybenzamine prior to adrenalectomy. Ten patients underwent urgent adrenalectomy during the same hospital admission, whereas the other 15 patients were discharged from hospital and returned for elective adrenalectomy. There were no mortalities in either group, but the urgent surgery group showed clinically significant differences: increased use of intra-aortic balloon pumps, a higher incidence of preoperative ICU admissions for crisis, a higher post-operative complication rate, more post-operative ICU admissions and a longer post-operative length of stay.

In their literature review, the authors found 97 patients who underwent adrenalectomy for PMC. In this group, they identified three different management options: emergency surgery without prior α-blockade; urgent surgery with α-blockade and medical stabilization; and elective surgery post-discharge after medical therapy to initially treat the crisis. The combined data for patients undergoing elective and urgent surgery were compared to the emergency surgery group. The most striking significant difference between these groups was the mortality rate, which was found to be 18% in emergency surgery patients compared to 0% in the elective/urgent surgery patients.
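The 18% versus 0% mortality contrast can be checked for statistical significance with a simple two-by-two test. Below is a minimal sketch in Python; the split of the 97 reviewed patients between the two management groups is a hypothetical assumption chosen to reproduce the reported percentages, since the review is quoted here only by its totals:

```python
from scipy.stats import fisher_exact

# Hypothetical split of the 97 reviewed patients (assumed for illustration;
# the source gives only the total and the per-group mortality percentages).
emergency_deaths, emergency_survivors = 6, 28               # ~18% mortality
elective_urgent_deaths, elective_urgent_survivors = 0, 63   # 0% mortality

table = [[emergency_deaths, emergency_survivors],
         [elective_urgent_deaths, elective_urgent_survivors]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"Fisher exact test: p = {p_value:.4f}")  # a small p supports a real difference
```

Fisher's exact test is the natural choice here because one cell of the table is zero and group sizes are small, where a chi-squared approximation would be unreliable.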
There were other statistically significant differences, such as an increased rate of preoperative diagnosis of pheochromocytoma in the elective/urgent patients, a higher incidence of tumour haemorrhage or rupture in the emergency surgery patients, a higher incidence and longer duration of preoperative α-blockade in the elective/urgent surgery patients, a higher rate of laparoscopy in the elective/urgent surgery patients and increased risks of both intra-operative and post-operative complications in the emergency surgery patients. Currently, this is the only large-scale study available regarding the management of pheochromocytoma crisis. Based on the authors' experience at their own institution, it appears feasible and safe to attempt medical therapy and elective adrenalectomy if the patient can be discharged safely from hospital, as outcomes are better for patients undergoing elective compared to urgent surgical resection during the same admission.

From the data available in the literature, it is quite clear that emergency adrenalectomy without adequate preoperative α-blockade is associated with high morbidity and mortality in the treatment of PMC. It is therefore recommended to offer urgent adrenalectomy to those patients who are able to partially recover under intensive medical management, while elective adrenalectomy can be reserved for patients who fully recover with medical management and who can safely be discharged from the hospital. Ideally, adrenalectomy should be planned within 4-6 weeks following discharge. This study is limited in that it is a retrospective review, but since PMC is such a rare clinical entity, it is very unlikely that a prospective, randomized study could ever be carried out [6]. Nevertheless, it seems clear that emergency adrenalectomy should be discouraged as an initial treatment of PMC and that medical therapy and eventual urgent or elective surgery should be the preferred management if the patient's condition allows it. A flow diagram for decision-making in patients with pheochromocytoma crisis is shown in Figure 2.

Conclusion

There has been a paradigm shift in the surgical management of PMC, from performing emergency adrenalectomy immediately after the diagnosis to now favouring medical stabilization followed by elective adrenalectomy in a more controlled and ideal situation, while allowing for urgent adrenalectomy in the same hospital admission if necessary. There are currently no guidelines available or Level 1 evidence to support this change in practice, and randomized studies would be impractical to perform due to the rare presentation of this clinical entity. Further retrospective studies with larger sample sizes may be helpful in discerning the clinical outcomes of different management strategies and making a stronger recommendation for the preferred treatment of PMC.

Author details

Tanya Castelino and Elliot Mitmaker*

*Address all correspondence to: elliot.mitmaker@mcgill.ca

McGill University Health Centre, Montreal, Canada
Performance of the SD Bioline rapid diagnostic test as a good alternative to the detection of human African trypanosomiasis in Cameroon

Background. Case detection is essential for the management of human African trypanosomiasis (HAT), which is caused by Trypanosoma brucei gambiense. Prior to parasitological confirmation, routine screening using the card agglutination test for trypanosomiasis (CATT) is essential. Recently, individual rapid diagnostic tests (RDTs) for the serodiagnosis of HAT have been developed.

Objective. The purpose of this study was to evaluate the contribution of SD Bioline HAT to the serological screening of human African trypanosomiasis in Cameroonian foci.

Methods. Between June 2014 and January 2015, blood samples were collected during surveys in the foci of Campo, Yokadouma, and Fontem. The sensitivity (Se) and specificity (Sp) of SD Bioline HAT were determined using the CATT as the gold standard for the detection of specific antibodies against Trypanosoma brucei gambiense.

Results. A total of 88 samples were tested: 59.1% (n=52) in Campo, 31.8% (n=28) in Yokadouma, and 9.1% (n=8) in Fontem. There were 61.4% (n=54) males and 38.6% (n=34) females, and the average age was 35.4 ± 19.0 years. In the probed foci, the overall seroprevalence was 11.4% (95% confidence interval: 6.3-19.7%) with the CATT method and 18.2% (95% confidence interval: 11.5-27.2%) with the SD Bioline HAT RDT method. The SD Bioline HAT's Se and Sp were 80.0% and 89.7%, respectively.

Conclusions. This study demonstrated that the overall performance of the SD Bioline HAT was comparable to that of the CATT, with high specificity in the serological detection of HAT.

INTRODUCTION

Human African trypanosomiasis (HAT), sometimes known as sleeping sickness, is a parasitic disease transmitted by vectors that is endemic in many sub-Saharan African nations. 1 It is caused by a flagellated protozoan of the genus Trypanosoma, which is transmitted naturally to humans by the tsetse fly. HAT is one of the neglected and lethal tropical illnesses, with its Rhodesian form caused by Trypanosoma brucei rhodesiense and its Gambian variant caused by Trypanosoma brucei gambiense. 2,3 It is characterized by a clinical presentation that is non-specific and lacks pathognomonic symptoms. 4 HAT is a cause for concern in intertropical Africa, where it is on the rise. 5 In 2014, the World Health Organization (WHO) recorded 3,797 cases of HAT and in 2015, 2,804 additional cases. The illness was estimated to have caused 3,500 fatalities in 2015. Sustained control efforts have reduced the number of new cases, as 992 and 663 cases were reported in 2019 and 2020, respectively, and more than 95% of reported cases were attributable to T. b. gambiense. The Democratic Republic of the Congo (DRC) is the most impacted nation, with over 75% of the gambiense cases notified. 8 The care of the Gambian form is based on the detection of cases, followed by the administration of the most appropriate medication based on the disease's stage. 9 The typical approach for mass screening is the CATT 10 ; however, its use during active and passive screening is restricted in specific situations. 11 Consequently, new tactics for the control and management of HAT have been developed. In several countries, rapid diagnostic tests (RDTs) for the identification of HAT have been developed and reviewed in recent years. 12,13
With the WHO's objective of eradicating HAT by 2030, these tests offer an alternative option for routine screening in health care institutions in endemic areas. For these RDTs to be effectively disseminated, more research must be conducted on their performance. The National Program for the Control of Human African Trypanosomiasis (PNLTHA) is actively studying cases in three HAT foci in Cameroon (Campo, Fontem, and Yokadouma).

Ethical considerations

All necessary precautions were taken to ensure that the rights and freedoms of the participants in the research were respected. In order to conduct the present study, ethical clearance No. 2015/0003 was sought and obtained from the institutional ethics committee for research for human health of the school of health sciences (Yaoundé).

Study design

We conducted a cross-sectional study over a period of nine months, from June 2014 to March 2015. The samples were collected in three foci: Campo in the South, Yokadouma in the East, and Fontem in the Southwest of Cameroon. These foci are geographically propitious zones for the hatching of tsetse flies: a temperature of about 25 °C, a relative humidity of 80 to 85% and plenty of shade. 14 The participants in the study were the inhabitants of the foci and the refugees from the Central African Republic residing in the Yokadouma camp. However, pregnant women and infants were not included in our study. Sampling was done by simple random sampling, which consisted of drawing lots directly from individuals in the population of the various foci surveyed.

Participant enrolment process

Our work was conducted during the various surveys organized by the national HAT control program of Cameroon. They were done in several stages, ranging from the census to prior awareness-raising among the target population by the field team. We then approached each participant by presenting the information notice and explaining in simple terms the purpose of the study, its interest, the amount of blood to be collected, and how the samples and results would be managed. Anyone who understood and accepted the conditions of the study gave their consent by signing the informed consent form. After this stage, each participant was registered and then taken to perform the different CATT and rapid diagnostic screening tests. At the end, the results were handed to them individually. Blood sampling consisted of taking about 200 µl of blood from each participant's fingertip. This blood was stored in heparinized microcapillary tubes and arranged on racks numbered from 1 to 10 for the first step of screening. However, for all individuals who tested positive in the first step of screening, whole blood was collected from the elbow in a 4 ml heparinised tube for further testing (CATT dilution and RDTs).

Tests performed

The samples were analysed both in the field and at the HAT laboratory of the Organisation for Coordination of the Control of Endemic Diseases in Central Africa, Yaoundé. Two types of tests were performed for each sample: the card agglutination test for trypanosomiasis and the rapid diagnostic test.

Card Agglutination Test for Trypanosomiasis

The Card Agglutination Test for Trypanosomiasis (CATT, Figure 1) is a direct plate agglutination test. It consists of bringing together trypanosome antigens, consisting of whole and lyophilized T. b. gambiense, and the whole blood of the person to be examined.
It is an interaction between a specific agglutinating antibody and a particular antigen. A drop of reconstituted reagent (about 45 µl) was deposited on the card, and a drop of blood (about 25 µl) was added. The mixture was then spread, and the card was placed for five minutes on a rotary shaker at one revolution per second. The reading was done immediately after the 5 minutes of stirring, with the naked eye, with reference to the positive and negative controls. Quantitation was performed on all CATT-positive whole blood samples. It consisted of taking 100 µl of whole blood for successive dilutions (1:2, 1:4, 1:8, 1:16) with 100 µl of CATT buffer each time. The titration was done by taking 25 µl of each blood dilution, placing it in the test area on the card, and adding a drop of reconstituted reagent. For the rest, we proceeded as for the CATT test. This quantification was done in order to determine the positivity threshold of each sample.

The rapid diagnostic test

The rapid diagnostic test SD Bioline HAT is an immunochromatographic test for the rapid and qualitative detection of antibodies against Litat 1.3 and Litat 1.5, antigens specific to T. b. gambiense. The procedure used was according to the manufacturer's instructions (Figure 2). A drop of whole blood was placed in the round window of the test. Subsequently, we added four (4) drops of diluent to this spot, and the result was read within 15 minutes of the deposit. The result was negative when a single coloured band was observed at "C" in the result window. It was positive when two coloured bands were observed, either on line 1 and control line "C" (positive for Litat 1.3) or on line 2 and control line "C" (positive for Litat 1.5), or when three coloured bands appeared at the control line, line 1 and line 2, indicating a positive result for both Litat 1.3 and Litat 1.5. The result was invalid when the control band "C" did not appear, regardless of the other results observed.

Statistical analysis

Data were analysed using R 3.1.1 software. This analysis was used to calculate prevalence (p), sensitivity (Se), specificity (Sp), positive predictive value (PPV) and negative predictive value (NPV). A P value of less than 0.05 was considered significant.

To assess the sensitivity of the CATT, successive dilutions of whole blood were performed. The threshold of suspicion of a case of HAT in Cameroon is a positive CATT from 1:16 of diluted blood (PNLTHA, Cameroon). 15 Based on this national algorithm, our results suggested that the overall seroprevalence with the CATT method was 11.4% (95% CI: 6.3-19.7%) in the probed foci (10/88).

Performance of rapid tests for HAT case detection

The performance of SD Bioline HAT was calculated taking the CATT as the reference. We observed that, as the dilutions increased, the sensitivity of the RDT increased while the specificity decreased. The threshold for CATT positivity on diluted blood is 1:16 in Cameroon. At this threshold, the SD Bioline HAT showed a sensitivity of 80.0% and a negative predictive value of 97.22%, and the p-value of each was greater than 0.05, indicating a non-significant difference between these values and those of the CATT. At the same threshold, the specificity was 89.7% while the positive predictive value was 50.0%, and the p-value of each was less than 0.05, indicating a statistically significant difference between these values and those of the CATT.
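As an illustration of these indicator calculations, the two-by-two counts can be back-calculated from the reported figures (88 samples, 10 CATT-positive at the 1:16 threshold, Se = 80.0%, Sp = 89.7%), giving 8 true positives, 8 false positives, 2 false negatives and 70 true negatives. A minimal sketch in Python, assuming these reconstructed counts:

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Intrinsic (Se, Sp) and extrinsic (PPV, NPV) indicators of a binary test."""
    se = tp / (tp + fn)   # sensitivity: positives among reference-positives
    sp = tn / (tn + fp)   # specificity: negatives among reference-negatives
    ppv = tp / (tp + fp)  # positive predictive value
    npv = tn / (tn + fn)  # negative predictive value
    return se, sp, ppv, npv

# Counts back-calculated from the reported percentages (CATT 1:16 as reference).
se, sp, ppv, npv = diagnostic_performance(tp=8, fp=8, fn=2, tn=70)
print(f"Se={se:.1%}  Sp={sp:.1%}  PPV={ppv:.1%}  NPV={npv:.2%}")
# -> Se=80.0%  Sp=89.7%  PPV=50.0%  NPV=97.22%
```

Note how the low prevalence (10/88 reference-positives) pulls the PPV down to 50% even though the specificity is close to 90%, which is the usual behaviour of screening tests in low-prevalence settings.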
Table 1 shows the values of the intrinsic (Se and Sp) and extrinsic (PPV and NPV) performance indicators of SD Bioline HAT.

DISCUSSION

Our primary objective was to evaluate the contribution of the rapid diagnostic test to the screening of HAT cases caused by T. b. gambiense in Cameroon. In addition to whole blood, we performed four dilutions of the CATT (1:2, 1:4, 1:8, 1:16) as a reference test. We utilized this RDT for HAT screening because it is currently available and distributed by the Institute of Tropical Medicine. To evaluate this test, samples were collected in Cameroon's Campo, Yokadouma, and Fontem foci. Campo and Fontem are recognized and active HAT foci in Cameroon, whereas Yokadouma is a suspected and silent focus. 16,17 The investigation in the latter focus (Yokadouma) tracks the migration of refugees fleeing political instability in the Central African Republic (CAR), the majority of whom are from Nola, an active and well-known HAT focus in the CAR. 18 CATT analyses included 88 individuals, the majority of whom were sampled in the Campo focus and the remainder in the Yokadouma and Fontem foci. The prevalence of serological HAT (CATT-positive on whole blood) was 36% overall. It was reduced to 11.4% by taking into account the dilution of whole blood to 1:16, which corresponds to Cameroon's suspicion threshold for HAT cases requiring surveillance. 15 It should be noted that for HAT screening, the WHO recommends dilutions of positive whole blood CATT tests in order to eliminate the possibility of false positives. Anyone in Cameroon with a positive CATT at a blood dilution of 1:16 is considered serologically positive for HAT and is monitored by the PNLTHA. This threshold is set at 1:8 in Chad and the Central African Republic, and 1:4 in the Democratic Republic of the Congo, to eliminate false positives. 15 False positives on the CATT test may be caused by cross-reactions following exposure to other pathogens, such as the animal trypanosomes T. b. brucei and T. congolense. 19 Schistosomiasis, filariasis, and toxoplasmosis have been demonstrated to be capable of agglutinating CATT at low titres. 20,21 During our fieldwork, we also observed the presence of microfilariae in the ganglion fluid of a CATT-positive whole blood sample. The overall blood test positivity rate was 18.2% when SD Bioline HAT was used as an alternative screening tool. At a blood dilution threshold of 1:16, we obtained a CATT prevalence of 11.4%, which was closer to the SD Bioline HAT prevalence of 18.2% than the value obtained with undiluted blood. These values indicate a close correlation between the detection capabilities of CATT on 1:16 diluted blood and SD Bioline HAT. SD Bioline HAT's sensitivity reflects its ability to produce positive results in all individuals who have had contact with T. b. gambiense and have blood antibodies against Litat 1.3 and Litat 1.5; its specificity reflects its ability to produce negative results in all individuals who have never been in contact with T. b. gambiense. Using the CATT as a benchmark, these two indicator values of the performance of RDTs in identifying serological cases of HAT were calculated. In contrast to specificity, the sensitivity of this RDT increased as the CATT was diluted. At the CATT blood threshold of 1:16, Se was 80.0% and Sp was 89.7%. These values indicate that SD Bioline HAT performed satisfactorily when investigating suspected cases of HAT. In 2014, Sternberg et al.
found Se = 82% and Sp = 97% by evaluating the performance of SD Bioline HAT and two prototype RDTs on 500 samples, including 250 cases and 250 controls from Angola, Central African Republic, and Uganda. 22 In addition, Bisser et al. obtained a sensitivity of 87.8% in the evaluation of the optimization of the same test using 49 parasitologically confirmed HAT specimens, and a specificity of 93-95% after evaluating the SD Bioline HAT and the optimized SD Bioline using 399 control samples in active screening in the DRC. 23 Comparing these RDTs with CATT diluted 1:8 also yielded a sensitivity of 89.3%. The sensitivity of SD Bioline HAT, which was observed to be 80% in this study, was slightly lower than the manufacturer-reported value (98%). Our small sample size in comparison to theirs could account for the differences. In addition, no parasitological cases were confirmed.

Limitations of the study

This study focuses on HAT, which is a rare and neglected tropical disease in Cameroon. The main limitations of this study were limited access to the different collection foci, located in remote areas, the small sample size, as well as financial limitations for the acquisition of rapid diagnostic tests.

CONCLUSIONS

The study in three HAT foci in Cameroon revealed 11.4% serological cases with the CATT method and 18.2% with SD Bioline HAT. The performance of the SD Bioline HAT RDT compared to the CATT method shows that SD Bioline HAT could be an alternative to be adopted in most HAT foci in Cameroon, considering that these foci are in remote areas without the appropriate infrastructure and technical platforms to perform the CATT.
Population decline is linked to migration route in the Common Cuckoo

Migratory species are in rapid decline globally. Although most mortality in long-distance migrant birds is thought to occur during migration, evidence of conditions on migration affecting breeding population sizes has been completely lacking. We addressed this by tracking 42 male Common Cuckoos from the rapidly declining UK population during 56 autumn migrations in 2011–14. Uniquely, the birds use two distinct routes to reach the same wintering grounds, allowing assessment of survival during migration independently of origin and destination. Mortality up to completion of the Sahara crossing (the major ecological barrier encountered in both routes) is higher for birds using the shorter route. The proportion of birds using this route strongly correlates with population decline across nine local breeding populations. Knowledge of variability in migratory behaviour and performance linked to robust population change data may therefore be necessary to understand population declines of migratory species and efficiently target conservation resources.

Migratory species are increasingly threatened across the globe 1 , driven by climate change 2,3 , habitat change and habitat loss 4,5 . Understanding where mortality occurs during their annual cycles is therefore increasingly important, especially for organisms such as long-distance migrant land birds, which show some of the steepest population declines 4,5 . The breeding behaviour of one such species considered here, the obligate brood parasite Common Cuckoo, has been extremely well studied, but its migration routes have been very poorly known until now 6 . As migration periods have been implicated as the stage of the annual cycle during which most mortality occurs in migrants 7,8 , understanding the variability of migration routes and associated patterns of mortality in these birds is vital to untangling the causes of their population declines. Until very recently, most migratory birds have, however, been too small to carry the tracking devices that have been available. Consequently, study of their seasonal mortality and migration routes has so far been restricted to indirect 7 and strongly biased 9 methods, and evidence of migration mortality impacting on breeding populations has been lacking, despite its potential to do so 10 . Miniaturization of archival light-level recorders, known as 'geolocators' 11 , has recently allowed the re-construction of the migratory paths of some smaller species, providing insight into their migration strategies 12 and how environmental conditions can constrain them 13 . However, as data are restricted to individuals that successfully return to allow data retrieval, these devices do not allow the study of patterns of mortality. The recent availability of platform transmitter terminals (PTTs) as small as 5 g, however, has enabled us to track the migrations and mortality of the largest nocturnally migrating long-distance migrant land birds, such as Common Cuckoos, in close to real time 14 . Detailed monitoring of breeding birds in the United Kingdom 15,16 has shown that this species' population is rapidly declining, but that population trends vary locally. This enabled us to track birds from widely spaced localities with a range of cuckoo population trajectories, to test whether there is a relationship between migratory behaviour and the degree of local population change.
We found that the birds used two distinct routes to reach the same wintering grounds, allowing assessment of survival during migration independent of origin and destination. Mortality up to completion of the Sahara crossing (the major ecological barrier encountered in both routes) was higher for birds using the route that was shorter up to that point. The proportion of birds using this route strongly correlated with population decline across nine local breeding populations. These results demonstrate that mortality during migration can impact breeding populations of long-distance migrants. Consequently, knowledge of variability in migratory behaviour and performance linked to robust population change data may be necessary to understand population declines of migratory species and to efficiently target conservation resources.

Results

Migration routes and survival. We tracked cuckoos from nine widely spaced localities within the United Kingdom, which varied in major habitat type and probably cuckoo host species, as well as in cuckoo population trajectories. Two localities were in the Highlands of Scotland (Kinloch & Skye and Trossachs National Park), one in the uplands of central Wales (Tregaron), three across the south of England (Dartmoor, New Forest and Ashdown Forest), one in the East Midlands (Sherwood Forest) and two in East Anglia (Thetford Forest and Norfolk Broads) (Fig. 1 and Supplementary Tables 1 and 2). The birds used two distinct routes during migration from the United Kingdom, heading either Southwest (West route) via Spain or Southeast (East route) via Italy or the Balkans (Fig. 2a), with very high individual route consistency between years (Supplementary Fig. 1). However, all used the same Central African wintering grounds (Fig. 2a,b and Table 1). As local breeding populations varied strongly in their composition of West and East migrating individuals (Fig. 1), we were able to assess differences in mortality associated with the two migration routes, while controlling for both area of origin and destination. Estimated apparent survival rates across both routes were 0.799 (95% confidence limits 0.615-0.908) up to completion of the Sahara crossing, 0.776 (95% confidence limits 0.604-0.807) during the remainder of autumn migration and 0.675 (95% confidence limits 0.448-0.841) from completion of autumn migration up to return to the United Kingdom. Up to completion of the Sahara crossing, the survival of birds taking the West route was lower than that of birds taking the East route (binomial generalised linear mixed model (GLMM); df 1, 10, F = 7.13). There was no difference in apparent survival between the routes during the remainder of autumn migration or from completion of autumn migration up to return to the United Kingdom (binomial GLMM; df 1, 6, F = 1.52, P = 0.263), but sample size was small for the latter (East: 24 migrations by 16 birds, West: 11 migrations by 10 birds). Excluding the cases most likely to be tag failure (see Methods) resulted in the removal of one bird from both the East and West routes for autumn migration up to completion of the Sahara crossing, and the survival of birds taking the West route was still lower than for those taking the East route (binomial GLMM; df 1, 10, F = 7.72, P = 0.019; estimates: East 0.985, 95% confidence limits 0.730-0.999; West 0.596, 95% confidence limits 0.144-0.928).
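A minimal sketch of how apparent survival with binomial confidence limits can be computed from counts of tracked migrations is shown below. The per-route counts are placeholders rather than the study's data, and statsmodels' proportion_confint (Wilson interval) stands in for the GLMM machinery the paper actually used, which additionally handled year and repeated-individual random effects:

```python
from statsmodels.stats.proportion import proportion_confint

def apparent_survival(survived, tracked, method="wilson"):
    """Point estimate and 95% CI for apparent survival over one migration stage."""
    rate = survived / tracked
    lo, hi = proportion_confint(survived, tracked, alpha=0.05, method=method)
    return rate, lo, hi

# Placeholder counts per route for the stage up to completion of the Sahara crossing.
for route, (survived, tracked) in {"East": (28, 30), "West": (13, 22)}.items():
    rate, lo, hi = apparent_survival(survived, tracked)
    print(f"{route}: {rate:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

The Wilson interval is preferred over the normal approximation here because stage-specific sample sizes are small and the survival rates sit near the boundary of the [0, 1] range.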
After the removal of one bird from the East route for the remainder of autumn migration, there was still no difference in apparent survival between the routes (binomial GLMM; df 1, 9, F = 0.01, P = 0.938; estimates: East 0.797, 95% confidence limits 0.573-0.920; West 0.786, 95% confidence limits 0.447-0.944). No cases were removed for the period from arrival at the midwinter location up to return to the United Kingdom. Survival to completion of the Sahara crossing, the major ecological barrier encountered in both migration routes, was therefore significantly lower on the West route, which is 12% shorter to that point (Table 1), whereas there was no difference in apparent survival during the remainder of the birds' migrations through tropical Africa. This demonstrates that the costs of migration may be route specific and are not necessarily related directly to migration distance.

Migration routes and local population change. Across the nine areas of the United Kingdom in which cuckoos were tagged, the proportion of birds using the West route, associated with lower survival, was correlated with the degree of local breeding population decline assessed using two independent population change data sets: national breeding bird Atlases in 1988-91 and 2007-11 (ref. 15), and the annual Breeding Bird Survey (BBS) 16 (Fig. 3). Although there was some geographical association of the routes within Britain, with all birds from the upland areas in Scotland and Wales using the East route, the correlation between route use and population change remained for the more robust change data set (see Fig. 3 legend) from the Atlases after controlling for the upland/lowland division (Fig. 3). This provides the first direct evidence that conditions encountered during migration can have an impact on breeding populations.

Potential drivers of mortality. Unexpectedly 7 , most of the excess mortality associated with the West route occurred in Europe, before the desert crossing (Fig. 2c). It is therefore likely that conditions at stopovers in this area are responsible. Decreasing rainfall in Spain 17 is a possible cause. Increasingly severe droughts and associated wildfires have occurred in recent years, for example in 2012, when none of the three birds taking the West route survived to complete the desert crossing. Other possibilities include large-scale habitat changes or increased predation pressure. Migration strategies are thought to be more energy-selected in autumn than in spring, when they are time-selected 18 ; thus, autumn migration strategies are likely to have evolved in relation to energy availability at stopovers, making them vulnerable to the effects of environmental change at this time. Birds using the West route left the breeding grounds on average 8 days later and subsequently spent less time in stopovers before crossing the Sahara than birds using the East route (Table 1), possibly undertaking more pre-migratory fattening at the breeding location. Changing conditions on the breeding grounds, such as the declines of the large moths whose caterpillars are the major food source for adult cuckoos, which have been particularly severe in the areas in which birds using the West strategy were tagged 19 , may therefore have a greater impact on them than on birds taking the East route and would exacerbate the effects of poor conditions at stopovers further south in Europe.
A larger sample of individuals and years is, however, required to determine the potentially interactive effects of conditions in both the United Kingdom and at stopovers in mainland Europe on the survival of cuckoos using the two routes.

Origin and persistence of the two migration routes. The presence of birds within the United Kingdom using two distinct routes to reach the same wintering grounds provided a unique opportunity to assess survival on migration independently of origin and destination. Double colonization after the last ice age by birds with different evolutionary histories, including migration routes, is a probable explanation for the occurrence within England of birds using two different routes. Genetic analysis is required, however, to determine whether England is in fact such a secondary contact zone 20 , although there is some evidence that birds from adjacent parts of mainland Europe, such as Belgium, where the population of Common Cuckoos is also in rapid decline, use the West route 21 . Tracking birds from this population would be required to confirm this. Any colonization by birds using the West route may be quite recent, as before this study there was no evidence of birds from the United Kingdom using the West route 9 : all ringing recoveries have been of birds apparently using the East route, which is shared with birds from Scandinavia 22 . Alternatively, use of the West route could be a flexible response to delayed onset of autumn migration. Under this scenario, the route could potentially have arisen through birds retracing the route of their spring migration. Finally, the West route could have arisen through reversal of the longitudinal component of the East route at a time when it conferred a selective advantage. Such an advantage could come from increased breeding opportunities arising from the later departure date of West birds, because the route is restricted to the geographical area (but not to those habitats) in which Reed Warblers, which continue breeding after male cuckoos have left the United Kingdom, are a major host. The fact that the two migration routes ultimately converge on the same wintering grounds is also very surprising, as the use of different routes is usually a precursor to occupation of different wintering areas by different breeding populations 23 (a pattern known as migratory connectivity). The absence of migratory connectivity in this case is possibly a consequence of a restricted winter range forcing birds to occupy the same wintering area; that is, there are no possible equatorial or southern hemisphere wintering areas in West Africa due to the shape of the African continent. Alternatively, wintering location may be dictated by geographical variation in breeding phenology and by the timing of fuelling of spring migration at the wintering grounds 24 , resulting in birds from the same breeding area occupying the same wintering grounds. Why does the western route persist despite the increased mortality of birds using it? One possibility is that the conditions causing the increased mortality may have occurred only recently, or may not be consistent over longer timescales; thus, the composition of breeding populations has not yet adjusted. Migratory birds are believed to have relatively inflexible annual migration programmes, implying that tracking of environmental variation may take some time 25 .
Alternatively, mortality later in the annual cycle may offset the greater mortality up to this stage, but in such a scenario it is unlikely that there would be an association between route use and population change. There was no difference in survival between birds migrating via the two routes during the remainder of their southward migration in Africa (Fig. 2), nor on the return migration from the Central African wintering grounds to the United Kingdom, although sample sizes were small for the latter. As there is no difference in where birds using the two routes spent the midwinter period (Table 1, Fig. 2a), when they arrived there (Table 1) or in the spring migration routes they used (Fig. 2b), they are likely to be exposed to similar pressures during the rest of their annual cycle.

Discussion

Previous work has demonstrated that constraints on the timing of spring migration can cause population declines in long-distance migratory birds 3,26 , via effects on reproductive productivity 3 . There has been mixed evidence of this for European Common Cuckoo populations 27,28 ; future work needs to address the issue of whether there are constraints on the timing of spring migration that could cause a mismatch with breeding resource availability (including host nests in this case) in UK cuckoos. The results presented here, however, demonstrate that conditions during migration can also influence the population dynamics of long-distance migrants via effects on survival, and emphasize the need for a full annual cycle approach to understanding migratory life cycles 29 . Consequently, understanding variability in migratory behaviour and performance may be vital when attempting to understand population changes of these species. This may help to identify critical areas where stopover quality has declined and to predict the response of species to future climate change, allowing prioritization of conservation resources 30 . Further advances in the miniaturization of real-time tracking devices 31 should therefore result in the provision of extremely valuable information for the pressing concern of migratory land-bird conservation 1,4,5,30,32 , as well as the provision of important opportunities for raising awareness of these declines among the public 14 .

Methods

Tagging and tracking. Forty-seven adult male (second calendar year and older) Common Cuckoos Cuculus canorus canorus were tagged in Britain between 2011 and 2014, in nine geographically well-spread areas. These areas contrasted in recent local cuckoo breeding population change according to two independent data sets and represented a range of habitat types and therefore distinct communities of cuckoo hosts (Supplementary Tables 1 and 2). The tagging was licensed by the Special Marks Technical Panel operating on behalf of the British Trust for Ornithology and the UK Government's Home Office, and was carried out in accordance with their ethical guidelines. Birds were caught using mist-netting, sound lures (recordings of male and female cuckoo vocalizations) and a dummy (stuffed female Common Cuckoo) placed on a pole. Each individual was fitted with a PTT-100 from Microwave Telemetry weighing <5 g as a backpack, using 2 mm diameter flat-braided nylon cord 22 .

Satellite data. Locations of the PTTs were obtained from the ARGOS system 33 . All data were Kalman filtered before locations were received, with the exception of data received before May 2012 from the five PTTs deployed in 2011.
Mirror image locations were removed using the Douglas filter facility in Movebank and checked visually. These sometimes occurred, even with Kalman filtering, for the first location after a bird had moved a considerable distance from its previous position, but were easily identified based on subsequent locations. Analysis was carried out using a data set produced by selecting the best-quality location per duty cycle using Movebank method 1 (based on location class, error radii and number of messages), from a pool including locations for which the reliability could be assessed (classes 1, 2 and 3) and all other locations passing a plausibility filter based on speed-on-movement or mutual validation via proximity (Movebank Douglas filter options: Method DAR; keeplc = 1, mxred = 10, minrate = 120, ratecoef = 25, minoffh = 11 and rankmeth = 1).

Mortality. All birds from which transmissions were lost were initially assumed to have died. Some studies 7 have been able to prove bird death in some instances through additional observations, such as finding the corpse using final GPS coordinates, but this is not possible here due to the lower accuracy of Doppler PTTs. As tagged birds are not colour marked and are rarely seen in the field, proving tag failure through field observations of birds after tracking has ceased is not possible. Some cases of tag failure are therefore likely to be included in the mortality rates for each route. Battery degradation through prolonged habitat-mediated shading resulting in charge depletion is the most probable cause of tag failure according to the manufacturer. It is not clear how probable this is during deployment, as the tag enters a 'sleep' mode requiring minimal power to protect the battery when the charge drops below a threshold. Background rates of tag failure in the absence of shading are estimated to be very low, around 1% in the initial 12 months after deployment (Paul Howey, Microwave Telemetry Inc., personal communication). Although rates of failure are unlikely to differ between the two routes at any point of the annual cycle, the greater chance of poor tag charge leading to loss of contact with a bird in the forest zone south of 6°N, where canopy cover is higher, means it could disproportionately affect apparent survival of birds after the Sahara crossing. Apparent survival was therefore calculated (1) up to completion of the Sahara crossing, (2) from that point until arrival at the stopover location of minimum latitude and (3) from there until arrival back in the United Kingdom. This was to prevent any tag failures during the second period differentially impacting apparent survival of birds from the route with higher survival in earlier periods. In some cases, evidence in support of mortality comes from the following: (1) the tag remaining stationary for a prolonged period, especially in inhospitable terrain (for example, open desert) or non-seasonal locations (for example, throughout the winter in Europe); (2) data from the tag's temperature sensor indicating that it is no longer being buffered from diurnal environmental fluctuations by the bird's body temperature. In both cases, the caveat is that the tag could have become detached from the live bird, but this is extremely unlikely with the harness design used.
In almost all cases in which contact with a tag is permanently lost (even when, for example, temperature information already indicates that the bird is dead), there is a short period where charge lowers, presumably due to the tag resting in a position where limited sunlight reaches the solar panels. Longer periods of declining charge could, however, indicate technical failure of the tag. To account for the cases where lost contact is most likely to be indicative of tag technical failure, we therefore repeated the analysis excluding those instances where no further locations were received after a prolonged period of declining tag charge and there was no evidence in support of mortality before this. This resulted in the exclusion of two cases (one West and one East) from the stage up to completion of the Sahara crossing and one case (East) for the stage between completion of the Sahara crossing and arrival at the minimum latitude stopover/position occupied at midwinter. The remaining cases were those where there was either evidence that the bird had died or where contact was lost after no, or only a short, period of low or declining battery charge.

Migration metrics. Southward migration routes and stopover areas were identified using all transmissions, either until transmissions stopped or until each bird reached its most southerly stopover (see below for the definition of stopover), which was usually also the position occupied at midwinter. After excluding journeys by five birds, which were lost to tracking before they reached southern Europe, 56 journeys by all birds were classified into either West or East migrating groups based on whether they left Europe via Iberia or from Italy and eastwards. Individuals that were lost to tracking in southern Europe were classified according to the longitude of their final stopover location, as at this latitude the distributions of East and West birds that were tracked into Africa were discontinuous with the exception of one outlying bird, and none of the birds classified by this method was close to the overlap.

[Figure 3 caption: The modelled effect of migration route is shown by the solid line and its 95% confidence intervals by the dotted lines. The relationship was highly significant for both data sets (events/trials logistic regression; Atlas: df 1, χ² = 22.7, P < 0.0001; BBS: df 1, χ² = 12.0, P = 0.0005). When controlling for the upland/lowland habitat division (see Methods for definition), the relationship remained highly significant for the Atlas data set (events/trials logistic regression; df 1, χ² = 8.7, P = 0.0031) but not for the BBS data set (events/trials logistic regression; df 1, χ² = 0.0, P = 0.842). Atlas population change data are from surveys of 39,936 2-km squares in 1988-91 and 45,374 in 2007-11. BBS population change was based on surveys of up to 3,018 1-km squares each year between 1994-96 and 2007-11, requiring interpolation to give national coverage, and is therefore less robust for assessing local population changes. Habitat classification, migration route and population change data for each tagging location are shown in Supplementary Table 2.]

Stopovers were defined as periods when best locations from two or more consecutive scheduled transmission periods were within 50 km of each other, meaning the minimum duration of stopovers considered is 3 days. Stopovers of this duration could be missed if the best locations from the relevant duty cycles were away from the stopover location, or if there were no reliable locations from one duty cycle.
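The stopover rule above is straightforward to implement. Below is a minimal sketch, assuming one best fix per duty cycle as (lat, lon) pairs in chronological order and treating the rule as a chain of consecutive fixes within 50 km, a simplification of the published definition:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p1, p2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def find_stopovers(fixes, threshold_km=50.0):
    """Return (start, end) index pairs of runs of 2+ consecutive fixes within 50 km."""
    stopovers, start = [], None
    for i in range(1, len(fixes)):
        if haversine_km(fixes[i - 1], fixes[i]) <= threshold_km:
            start = i - 1 if start is None else start  # extend or open a run
        else:
            if start is not None:
                stopovers.append((start, i - 1))       # close the current run
            start = None
    if start is not None:
        stopovers.append((start, len(fixes) - 1))
    return stopovers

# Example: fixes 1-2 are ~20 km apart (a stopover), fix 3 is far away.
print(find_stopovers([(52.0, 0.5), (52.15, 0.6), (45.0, 9.0)]))  # -> [(0, 1)]
```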
Stopover duration was calculated by assuming that a bird arrived on the first day and departed on the last day that it was detected at a stopover location; durations are hence minima. Tags often showed gradually depleting battery charge during a stopover but in most cases charged and began transmitting on exposure to sunlight when a bird moved to a new location. Exceptions were therefore made when no locations were received for one or more transmission cycles before a bird was detected at a new location, in which case the bird was assumed to have remained at the previous stopover location until the day the last missed location was expected. Key migration events were defined as follows: departure from the breeding grounds was assumed to have occurred on the day before the first location more than 50 km away from it. Completion of the Sahara crossing was the day of the first location in the first stopover south of the Sahara. Completion of migration was the first day at the stopover with minimum latitude. Migration distance was the total distance moved between best locations per cycle, excluding movements within stopovers (that is, those under 50 km), from departure until the completion of migration. Duration of migration was the number of days between the assumed day of departure and the first location at the minimum latitude stopover, which, in the majority of cases, was also the location occupied at midwinter. Migration speed was the total distance travelled divided by the duration. Number of stopovers was the number of stopover sites occupied before arrival at the most southerly location. Migration metrics were calculated (a) to arrival at the first stopover south of the Sahara, the major barrier crossed during migration, and (b) to arrival at the most southerly stopover area occupied, for all individuals that completed the relevant migrations. The direction of the Sahara crossing was calculated (a) by taking the average of the bearings from the previous best-of-duty-cycle position to all locations during active migration that fell in Africa north of 15°N, or to the most northerly position below 15°N if there were none to the north, excluding positions in the Atlas mountains and the adjacent coast, as suitable stopover locations exist there, and (b) by calculating the bearing between the final pre- and first post-Sahara-crossing stopovers.
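These metric definitions translate directly into code. A minimal sketch follows, assuming one best fix per duty cycle tagged with its date; the under-50-km exclusion and the day-count duration follow the definitions above, while the coordinates in the example are hypothetical:

```python
from math import radians, sin, cos, asin, sqrt
from datetime import date

def _km(p, q):
    """Great-circle distance in km between (lat, lon) points in degrees."""
    la1, lo1, la2, lo2 = map(radians, (*p, *q))
    a = sin((la2 - la1) / 2) ** 2 + cos(la1) * cos(la2) * sin((lo2 - lo1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def migration_metrics(track):
    """track: chronological list of (date, (lat, lon)) best-of-cycle fixes,
    from the assumed departure day to arrival at the most southerly stopover."""
    # Total distance between successive best fixes, excluding within-stopover
    # movements (<50 km), per the definition in the text.
    dist = sum(d for d in (_km(p, q) for (_, p), (_, q) in zip(track, track[1:]))
               if d >= 50.0)
    days = (track[-1][0] - track[0][0]).days
    return {"distance_km": dist,
            "duration_days": days,
            "speed_km_per_day": dist / days if days else float("nan")}

# Example with two fixes one week apart (hypothetical coordinates):
print(migration_metrics([(date(2012, 7, 1), (52.4, 0.7)),
                         (date(2012, 7, 8), (45.5, 9.2))]))
```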
Statistical analyses. Analyses were undertaken using GLMMs in Proc Glimmix in SAS 9.2. Relationships between routes in migration metrics and apparent survival were modelled as a function of migration direction using Gaussian and binomial response distributions, respectively. Year and individual*region ('region' being tagging area) were initially included as random factors to account for dependencies on year and tagging area, and for pseudo-replication due to the inclusion of repeated journeys from the same individual. Random factors were removed if their covariance parameter was zero. The LSMeans function was used to output estimates of each variable for each route. Differences in key stopover and wintering locations were additionally modelled using average latitude, average longitude and latitude*longitude as predictor variables, with route as a binomial response variable. Results presented are marginal effects for the model with the lowest pseudo-Akaike information criterion (AIC). Models for the first stopover after the Sahara crossing failed to converge, so the latitude and longitude for this stopover were tested separately.

Local breeding population change around each tagging location was calculated by buffering all 10 km UK national grid squares in which a cuckoo was trapped for tagging by two 10 km squares (that is, producing a minimum area of 50 × 50 km centred on a 10 km square in which a cuckoo was tagged) and averaging population changes across all selected 10 km squares for each of the nine tagging localities. Indexes of population change for each 10 km square were available from two completely independent data sets: the Bird Atlas 2007-11 (ref. 15) and the BTO/RSPB/JNCC BBS 16 . For the Bird Atlas, change in abundance was the standardized arithmetic difference between the abundance in 1988-91 and 2007-11. Abundance was calculated as the proportion of surveyed 2-km Atlas squares (39,936 were surveyed in 1988-91 and 45,374 in 2007-11 across Britain and Ireland) falling in each 10 km square that were occupied by cuckoos 15 . For the BBS, abundance for each 1-km square was produced by modelling counts from surveyed 1-km squares for 1994-96 and 2007-09 (between 1,501 and 3,018 survey squares each year) with respect to landcover, northing, easting and elevation, and including a smoothing term, in a GAM 16 . The results were then condensed to give abundance values at the 10 km square level, and change was calculated as the estimated abundance index for 2007-09 minus the estimated abundance index for 1994-96. The two population change data sets are completely independent. The change index from the Atlas is, however, far more robust and spatially precise than the change from the BBS, because it is based on an order-of-magnitude greater number of data points, with no interpolation or spatial smoothing such as that used in the BBS data set, and with more field survey effort underlying each data point. As such, it is far more appropriate for determining local population changes. Relationships between local breeding population change and migration direction were then modelled using events/trials logistic regressions, with each tagging locality as a case, the number migrating east as the numerator and the total number tagged as the denominator of the dependent variable, and the average breeding population change index as the explanatory variable. Each bird is therefore represented by a single migration direction, irrespective of how many journeys it was tracked over. In the single instance in which a bird used different routes on consecutive journeys (see Supplementary Fig. 1), a classification of 0.5 was given to each route. A habitat variable classifying each tagging locality as upland or lowland was subsequently added to test whether relationships remained when controlling for this division. The altitudinal distribution of vegetation types differs greatly across the tagging locations due to geographical climatic variation; thus, upland was defined as areas in which the surrounding landscapes were predominantly mires, wet heaths and acid grassland.

Data availability. The authors declare that the data supporting the findings of this study are available in Supplementary Tables 1-6.
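To illustrate the events/trials logistic regression described above, the sketch below fits a binomial GLM with a (successes, failures) response in Python. The nine per-locality counts and change indices are invented for illustration and are not the study's data; the paper itself used SAS and additionally handled the single 0.5-classified bird as fractional counts:

```python
import numpy as np
import statsmodels.api as sm

# Invented per-locality data (nine tagging localities; not the study's values).
east = np.array([5, 4, 6, 2, 1, 3, 1, 0, 2])        # birds classified as East migrants
tagged = np.array([6, 5, 6, 5, 4, 6, 5, 4, 6])      # total birds tagged per locality
pop_change = np.array([0.2, 0.1, 0.15, -0.2, -0.4,  # local population change index
                       -0.1, -0.35, -0.5, -0.05])

X = sm.add_constant(pop_change)                     # intercept + change index
y = np.column_stack([east, tagged - east])          # events/trials -> (successes, failures)

fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(fit.summary())  # a positive slope means more East migrants where decline is weaker
```

The two-column response is what makes this an events/trials formulation: each locality contributes its trial count rather than being expanded into one row per bird.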
Case report: Amnestic mild cognitive impairment in multiple domains associated with neurofascin 186 autoantibodies: Case series with follow-up and review

Background: Neurofascin 186 autoantibodies are known to occur with a diseased peripheral nervous system. Recently, additional central nervous system (CNS) involvement has also been reported in conjunction with neurofascin 186 autoantibodies. Our cases enlarge the spectrum of neurofascin 186 antibody-related disease to include mild cognitive impairment (MCI).

Methods: We report here a case series after having examined the patient files retrospectively, including diagnostics such as blood and cerebrospinal fluid (CSF) analysis involving the determination of neural autoantibodies, brain magnetic resonance imaging (MRI), brain fluorodeoxyglucose positron emission tomography (FDG-PET), and extensive neuropsychological testing.

Results: We report on two patients with MCI. Brain MRI showed cerebral microangiopathy in both patients, and brain FDG-PET demonstrated pathology in the right prefrontal cortex, in the right inferior parietal cortex, and in both lateral occipital cortices in one patient. Neurofascin 186 antibodies were detected in serum in both patients, and neurofascin 186 autoantibodies were also detected in the CSF of one of these patients. At follow-up six months later, the neurofascin 186 autoantibodies had disappeared in one patient while persisting in the other.

Conclusion: We report on two individuals presenting MCI associated with neurofascin 186 antibodies, thus expanding the potential spectrum of neurofascin 186-associated disease. This report supports the recommendation to consider neurofascin 186 autoantibodies not just in peripheral nerve disease, but also in disorders involving CNS autoimmunity. More studies are needed to clarify the association between neurofascin 186 autoantibodies and cognitive decline.

Introduction

The spectrum of diseases associated with neurofascin 186 antibodies is limited mainly to peripheral neuropathy (1) such as chronic inflammatory demyelinating polyneuropathy (2) as well as subacute nodopathy (3), and to amyotrophic lateral sclerosis (4). Recently, central nervous system (CNS) involvement in neurofascin 186 autoantibody-associated peripheral neuropathy has also been demonstrated (5). However, cognitive impairment has rarely been reported in association with neurofascin 186 antibodies (5), and CNS structures might be involved alongside primary neuroinflammation in the peripheral nervous system. Here we report two patients predominantly presenting with cognitive impairment as a clinical phenotype, in whom the primary neuroinflammatory locus is probably not the peripheral nervous system. Our report thus highlights the novelty of a neurofascin 186 autoantibody-related affectation of the CNS through a possible inflammatory process associated with cognitive impairment.

Case reports

Case 1

A 75-year-old woman presented complaining of short-term memory disturbances, word-finding difficulties and depressive symptoms starting about a year earlier. She is a multimorbid patient with high care needs. Her comorbidities comprised essential tremor, lung empyema, steatosis hepatis with multiple liver cysts, cholecystolithiasis, coronary heart disease, arterial hypertension, mitral valve insufficiency, hyperlipoproteinemia, hypothyroidism, and polyneuropathy. She also has a total knee endoprosthesis on the left and coxarthrosis on the right.
She also has a smoking history of about 60 pack-years, but has probably been abstinent since 2019. Her mother suffered from dementia. She has been categorized as care level two and is about 70% disabled (i.e., severely disabled). Her daughter serves as a complete health care proxy for her. She acquired a secondary school level 1 certificate and worked as a telephone assistant in telecommunications. Concurrent medication comprised the following: pantoprazole 20 mg/d, atorvastatin 20 mg/d, L-thyroxine 50 µg/d, fexofenadine 180 mg/d and bupropion 150 mg/d. Psychopathological examination revealed a loss of drive and slowed psychomotor speed. Furthermore, she was also suffering from depression (ICD-10: F33.1: recurrent major depression, moderate; Geriatric Depression Scale (GDS) score: 6, i.e., depressive symptoms of mild to moderate severity). Neurological examination demonstrated pallanesthesia in her legs. At cognitive screening, she scored 29 of 30 points on the Mini-Mental Status Examination (MMST), but neuropsychological testing revealed an amnestic mild cognitive impairment (MCI) with deficits in information processing speed (attention), visuospatial cognition, and verbal and figural memory (Figure 1). Together with cognitive deterioration in multiple domains, a caregiver rating (Bayer Activities of Daily Living Scale, B-ADL) indicated mild impairment of ADL competence (mean B-ADL: 3.3). Magnetic resonance imaging (MRI) revealed cerebral microangiopathy with Fazekas grade 1. Cerebrospinal fluid (CSF) analysis showed elevated S100 protein (4.7 µg/l, pathological if > 2.7 µg/l) (Table 1) and ptau181 protein (Table 1). We also identified intrathecal IgG synthesis (Table 1). We also detected anti-neurofascin 186 autoantibodies in serum at 1:32 via anti-neural antigen IgG immunofluorescence testing with a BIOCHIP mosaic with brain tissue and recombinant cells. At follow-up six months later, no serum neurofascin 186 autoantibodies were detectable. Her MCI remained evident over the course, but a dementia-like syndrome did not develop.

Case 2

A 66-year-old man presented complaining of memory problems for the past two years. His medical history included nicotine dependency, hyperlipidemia, presbyacusis and a history of prostate cancer. Psychopathology was indicative of recurrent major depression, moderate (ICD-10: F33.1; Beck Depression Inventory (BDI-II) score 31, i.e., depressive symptoms of severe degree). He is married, living with his wife, and has a daughter. He worked as a professional mason. He completed eight years of school, has a secondary school diploma, and has been retired for 3 years. His mother has severe dementia at the age of 87 years and requires constant health supervision. His father died of a myocardial infarction at the age of 46 years. Cognitive screening at the time-point of neuropsychological testing indicated mild cognitive impairment (MMST 26/30). At neuropsychological testing, mild difficulties were detected in confrontation naming (language), and more severe cognitive deficits became apparent in information processing speed (attention), phonematic word fluency (language/executive functions), cognitive flexibility (executive functions), visuospatial cognition, working memory, visual memory and partly in verbal memory (Figure 1). His cognitive performance profile was classified as MCI in multiple domains together with mildly reduced ADL competency (B-ADL: 4.4).
His MRI revealed cerebral microangiopathy, but his brain fluorodeoxyglucose positron emission tomography (FDG-PET) yielded pathological Z-scores in the right prefrontal cortex, in the inferior parietal cortex on the right side, the lateral occipital cortex on both sides, the visual cortex on both sides, and the lateral temporal cortex on the right side. Anti-neurofascin 186 autoantibodies were detected in his serum (1:320) and CSF (1:32) via anti-neural antigen IgG immunofluorescence testing with a BIOCHIP mosaic with brain tissue and recombinant cells. The neurofascin 186 autoantibodies were still present six months later (1:32). At follow-up he showed speech anomalies primarily entailing a stutter. However, he claims to have already stuttered when young, so it is not clear whether this should be interpreted as a speech disorder or a reactivation. The cognitive disturbances did not appear to be significantly progressive in his follow-up examination. He also denies suffering from hypomimia, vigilance fluctuations, REM sleep disturbances, and hallucinations.

Discussion

Our main finding here is the novelty of CNS involvement in neurofascin 186 antibody-associated autoimmunity over a follow-up of six months in two paradigmatic patients. Neurofascin 186 interacts with Neuropilin-1, which mediates axon guidance and adhesion during the formation of gamma-aminobutyric acid (GABA)ergic synapses in the cerebellum (6). Neurofascin 186 antibodies might thus have an impact on the function of GABAergic synapses in the cerebellum. Dysfunctional cerebellar GABAergic synapses might in turn affect cognitive functions via functional and anatomical connections between the cerebellum and hippocampus (7), a potential mechanism of action of how neurofascin 186 antibodies might act. Another mechanism of action is based on axonal pathology with complement deposition induced by neurofascin 155 and 186 antibodies, which selectively target the nodes of Ranvier in multiple sclerosis (8). Other reports support the role of neurofascin autoantibodies in demyelinating diseases such as multiple sclerosis (9). Neurofascin antibodies appear to be much more common in primary progressive multiple sclerosis than in relapsing-remitting multiple sclerosis (10) (Table 1). These studies suggest that axonal pathology associated with neurofascin 186 autoantibodies may contribute to a progressive course. In one of our patients, we also detected elevated ptau181, suggesting axonal brain damage. Considering the mechanistic studies of how neurofascin 186 autoantibodies might contribute to axonal brain pathology, the ptau181 elevation in one patient can be partially explained. However, axonal pathology in our cases was not caused by multiple sclerosis. The neuroaxonal CNS damage in strategically cognitively relevant areas may contribute to cognitive dysfunction. Mild cognitive dysfunction could also coincide with multiple sclerosis, but further progression to dementia would be entirely atypical for multiple sclerosis. However, the exact mechanism of cognitive dysfunction in association with neurofascin 186 autoantibodies remains unclear in our patients, especially when considering the Bradford-Hill criteria (11).
In our patient 2, it seems unlikely that a dysimmune neuropathy or a combined central and peripheral demyelinating syndrome causes the neurofascin 186-associated cognitive dysfunction, because the increased brain injury proteins and the evidence of intrathecal IgG synthesis suggest a central inflammatory process. Note that the FDG-PET results demonstrate hypometabolism of the frontal, temporal, and parietal lobes in the second patient. Frontotemporal degeneration might therefore be a probable cause of cognitive dysfunction in patient 2, although the clinical features for the FTD behavioral variant or primary progressive aphasia were not met. In addition, the cognitive impairment began at age 64 years, suggesting possible disease from frontotemporal lobar degeneration. However, the clinical and neuropsychological profiles do not suggest FTD in the second patient. Our follow-up investigations showed persistent neurofascin 186 autoimmunity in only one of the two patients; a peripheral nervous system affectation is also clinically conceivable with persisting neurofascin 186 autoantibodies. Although not formally retested, cognitive impairment was still obvious at follow-up in both cases. We do not finally know whether autoantibodies against neurofascin 186 play a causal role in the cognitive impairment of these two patients. Additionally, both patients suffered from major depression and were positive for cardiovascular risk factors, the latter most probably having caused the cerebral microangiopathy found in both MRIs. This might in turn have contributed, at least in part, to the observed cognitive impairment. Moreover and most interestingly, both patients had a positive family history for dementia. As patient 1 suffered peripheral nerve damage, the presence of neurofascin 186 antibodies should be regarded in conjunction with her peripheral nervous system affectation while considering the main manifestations of neurofascin 186-related disease described so far.

Limitations

The main limitation of our case report is that we had no neuropsychological follow-up in either patient. In addition, the origin of MCI in both patients could also be influenced by vascular pathology, as evidenced by cerebral microangiopathy on MRI and the neuropsychological profile showing impairments in several cognitive domains. However, the mild degree of microangiopathy argues against a pronounced vascular pathology as a cause of MCI in these patients. In addition, we cannot completely rule out that the neurofascin 186 antibodies were false positive in the first case, as they were not replicated six months later. However, the clinically evident severe cognitive impairment and circumstantial evidence for neuroaxonal cell damage in the brain, as well as intrathecal IgG synthesis, support the possible role of these neurofascin 186 autoantibodies according to the recently published criteria for autoimmune-based psychiatric syndromes (12). It would be beneficial to measure ptau181 levels in a future study to see whether tau levels normalize over the time course of neurofascin 186-associated cognitive impairment, which we would expect to be the case in autoimmune-mediated cognitive impairment. In the second case, the repeated findings of neurofascin 186 antibodies in addition to brain abnormalities also argue against a false-positive effect of the autoantibody results.
Another point worth mentioning is that neither patient has so far undergone immunotherapy as an individual drug trial, so we cannot assess any benefit of immunotherapy, which could have provided evidence of a possible link between the cognitive impairment and an autoimmune basis of the neurofascin 186 autoantibodies.

Conclusion

Our results demonstrate that neurofascin 186 autoantibodies can occur in association with amnestic MCI in multiple domains and should be considered in the differential diagnosis of mild cognitive impairment. The strength of this case series is the careful differential diagnosis with a large panel of neural autoantibodies, especially neurofascin 186 autoantibodies, and the longitudinal follow-up of both patients with MCI. The follow-up of both patients with stable cognitive impairment is also suggestive of an autoimmune-mediated time course and reveals no obviously progressive clinical course as in neurodegenerative diseases. These patients have not undergone immunotherapy because of the lack of evidence. More research is needed to investigate the presence of these autoantibodies in association with cognitive impairment in a larger homogeneous cohort without peripheral nervous system involvement, and to include comprehensive clinical follow-up to better assess the clinical significance of the autoantibody findings.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the corresponding author, without undue reservation.

Ethics statement

Ethical approval was given for this study. The patients provided their written informed consent to participate in the study. Written informed consent was obtained from the individuals for the publication of any potentially identifiable images or data included in this study.

Author contributions

NH and BC wrote the manuscript. All authors revised the manuscript for important intellectual content.

Funding

JW was supported by an Ilídio Pinho professorship, iBiMED (UIDB/04501/2020), at the University of Aveiro, Portugal. This study was funded by the Open Access fund of the University of Göttingen.
Flexural Strength Characteristics of Fiber-Reinforced Cemented Soil

This work deals with the flexural performance of a soil-cement for pavement reinforced by polypropylene and steel fibers, and the main purpose is to evaluate the effect of different curing times. In this sense, three different curing times were employed to investigate the influence of fibers on the material's behavior at varying levels of strength and stiffness as the matrix became increasingly rigid. An experimental program was developed to analyze the effects of incorporating different fibers in a cemented matrix for pavement applications. Polypropylene and steel fibers were used at 0.5/1.0/1.5% fractions by volume for three different curing times (3/7/28 days) to assess the fiber effect in the cemented soil (CS) matrices throughout time. An evaluation of the material performance was carried out using the 4-Point Flexural Test. The results show that steel fibers at 1.0% content improved initial strength and peak strength at small deflections by approximately 20% without affecting the flexural static modulus of the material. The polypropylene fiber mixtures had better performance in terms of ductility index, reaching values varying from 50 to 120, an increase of approximately 40% in residual strength, and improved cracking control at large deflections. The current study shows that fibers significantly affect the mechanical performance of CSF. Thus, the overall performance presented in this study is useful for selecting the most suitable fiber type corresponding to the different mechanisms as a function of curing time.

Introduction

With a lack of good-quality natural resources and the increasing cost of construction materials, sustainable techniques and materials that adapt to the location have been increasingly incorporated into highway and airway pavement projects. In the past decades, chemical soil stabilization has been widely used in pavement structures. This technique combines the use of local soil with cement to provide a strong material, which, when used in layers of the pavement, can support traffic loads [1][2][3][4][5][6]. For example, Consoli et al. [3] quantify the impact of cement quantity, porosity, and voids/cement ratio on the assessment of split tensile strength in sandy soil when reinforced and non-reinforced with fibers. A comprehensive series of split tensile tests were conducted to investigate this relationship. In summary, the results highlight the influence of cement quantity, porosity, and voids/cement ratio on split tensile strength in both fiber-reinforced and non-reinforced cemented soil. The findings demonstrate the positive effect of fiber reinforcement on split tensile strength and provide valuable insights into the relationships between these parameters for the effective evaluation of the studied soil mixture. Even though the use of local soil with cement provides a strong material, CS presents brittle behavior at failure, which means that, once the yield stress is exceeded, the material suddenly loses all or most of its initial support capacity. Furthermore, during their useful life, cemented layers tend to crack under repeated traffic loads.

A second group comprises materials with a low modulus of elasticity (ductile). These materials contribute to soil reinforcement and offer operational properties in post-critical stress areas. This group includes synthetic fibers such as polypropylene (PP), polyester (most commonly PET), polyethylene (PE), glass fibers, nylon fibers, and polyvinyl alcohol (PVA) fibers.
The advancement of plastic production technology has increased the interest in plastic-based fibers within modern industry. These synthetic fibers provide additional options for soil reinforcement due to their various properties and characteristics. Overall, Hejazi et al.'s [18] review underscores the importance of understanding and optimizing the properties of fibers used in reinforced soils, as well as differentiating between high-modulus-of-elasticity materials for soil strength and low-modulus-of-elasticity materials for post-critical stress properties. The inclusion of synthetic fibers has expanded the range of options available for soil reinforcement in contemporary applications. In practical applications, steel and synthetic fibers such as polypropylene (PP) or nylon fibers are commonly adopted for reinforcing cement-treated soils. Each fiber type offers its advantages and disadvantages when used for reinforcing cemented soil. Using steel fibers improves the tensile strength of cemented soil elements, as well as ductility, durability, and fatigue resistance. However, they have a disadvantage regarding corrosion susceptibility and a higher cost than PP fibers. PP fibers have the advantages of crack propagation control, being cost-effective, and being lightweight and easily blendable with cement, ensuring a uniform distribution throughout the soil mixture. However, PP fibers have lower tensile strength than steel fibers and are more sensitive to temperature [16]. Including fibers in soil or soil-cement is a reinforcement method still incipient in the pavement field [7,19]. This technique is already used in cemented materials to control cracking and prevent fragile, catastrophic failure [6,12,[20][21][22]. Most of these studies conducted laboratory tests using flexible synthetic fibers [6,7,12,[22][23][24][25][26] and steel fibers [7,12,23,26,27]. They reveal that the inclusion of the two fibers leads to significant improvements in the tensile and flexural strength of cemented materials, increasing the toughness and the ability of the material to resist stresses even after cracking, because the fibers serve as "bridges" mobilizing a wider mass of soil-cement, allowing a better redistribution of stresses in the matrix [13,28]. In terms of the mechanical performance of fibrous composites, most studies focus on unconfined compression [24,27,[29][30][31][32] and splitting tensile strength tests [3,8,[33][34][35] due to the availability of and familiarity with the apparatus. However, these tests do not reflect the stresses acting in the pavement [7,12,19]. In fact, pavement structures are subject to flexural loads that cause tensile stress and cracking at the bottom of the cemented base layer [35][36][37]. Therefore, for a better characterization of cemented materials in the pavement, recent studies [36,38,39] have used the flexural test for mechanical performance due to its similarity with the stress condition produced in the field. Despite this, the evaluation by the static flexural test using this standard offers an alternative for the fiber dosage of the material. Previous studies [6,7,12,21,27,38,40] recommend a fiber content that gives the material a deflection-hardening behavior, that is, after cracking, it must support and even increase its strength, indicating that the fibers are sufficient to transfer loads of the same magnitude even after the crack, with a prevalence of multiple small cracks instead of one big crack in the material.
Furthermore, comparing fibers is essential to select the most suitable fiber according to the design purpose. In flexural performance, a fiber that provides a deflection-hardening response at a lower fiber content is considered more cost-effective and more likely to be used in engineering applications [7,27,38,40]. Sukontasukkul and Jamsawang [12] compared the flexural performance of a soil-cement matrix at 20% cement content with PP and steel fibers at 0.5-1.0% fiber content. They concluded that the PP fiber contributed more significantly to post-peak strength, ductility, and toughness than the steel fibers. Although a high cement content was used, the flexural strength reached values between 0.1 and 0.2 MPa, representing a poor bond between the soil and the cement particles in the matrix. In this case, deflection-hardening behavior was not achieved. Most previous studies focused on using only one type of fiber and finding the best fiber content for that specific matrix. The few papers that address the two groups of fibers do not analyze the fact that fibers of different natures make different contributions as the matrix's strength and stiffness change. Therefore, this research aims to compare the flexural performance of a soil-cement for pavement reinforced by polypropylene and steel fibers considering the action of different curing times. In this sense, three different curing times will be employed to investigate the influence of fibers on the material's behavior at varying levels of strength and stiffness as the matrix becomes increasingly rigid. In this way, it is possible to identify which fiber type and content are the most suitable for the CS matrix and how the curing time affects the efficacy of the contribution of each fiber. The use of curing time as a factor is a novelty in the pavement area and aims to identify the fiber contribution over time, because it is believed that the results are more related to the strength and stiffness reached by the matrix than to the cement content utilized.

Materials and Methods

This section presents the experimental procedures and materials utilized to investigate the flexural performance of a soil-cement for pavement reinforced by polypropylene and steel fibers.

Test Materials

A typical clayey sand (SC in the USCS) classified as A-2-4 in the HRB classification system, from Brazil, was utilized in this study. This soil is typically used for infrastructure construction in São Paulo, Brazil, and Table 1 provides the main geotechnical properties of the soil. Ordinary Portland cement with the addition of slag, classified as CP II E-32 by the Brazilian standard and corresponding to type I cement in ASTM C150/C150M-22 [41], was used as the curing agent. According to NBR 11578, this type of cement can have 6 to 34% slag in its composition, and its compression strength at 28 days must be greater than or equal to 32 MPa. The CS dosage was carried out following the Brazilian standard NBR 12253 [42]. A cement content of 6% by dry weight of soil was considered optimum and used to prepare all specimens for the current study. In this study, 20 mm polypropylene and 33 mm steel fibers were used as reinforcement elements. These fibers are commercially available in large quantities in Brazil and can be used in future field tests or engineering projects. Furthermore, previous studies with similar PP fibers [3,43] and steel fibers [7,12] showed satisfactory results.
Figure 1 shows the appearance of each fiber, and Table 2 summarizes their main physical and mechanical properties.

Sample Preparation

The clayey sand was dried using a convective oven for 24 h to obtain a minimal initial moisture content (varying from 0.2 to 0.5%) two days before mixing with the cement and compaction. The amount of 6% of cement in dry weight was incorporated into the soil and mixed for 3 min. Next, water was added to the CS and homogenized according to the optimum moisture content of 8.5%, as determined by intermediate Proctor tests, for an additional 3 min. Finally, the PP or steel fibers were manually mixed with the CS, a procedure recommended by Consoli et al. [3]. The amount of fiber used refers to the dry weight of soil plus cement. A preliminary separation procedure was carried out before the mixing process of the PP fibers to avoid the aggregation problem. A similar process was used by Cristelo et al. [32], and it consisted of injecting compressed air into a bag containing the original fibers for 5 min. The test specimen is a rectangular beam measuring 100 mm wide, 100 mm high, and 350 mm long. Each sample was dynamically compacted in three layers using a manual hammer with a rectangular base of 50 mm × 50 mm and a mass of 2.5 kg dropped from a height of 30.5 cm, with 130 blows per layer. Each layer was scarified after compaction to improve the bond between layers. One day after the compaction process, the specimens were demolded, wrapped in a plastic sheet to avoid moisture variation, and then cured inside a controlled room with a temperature of 23 °C ± 2 °C until the target curing time (3, 7 or 28 days). The samples were considered acceptable for testing when they met the following criteria: dry density values between 99% and 101% of that determined by the intermediate Proctor test, moisture content varying ±0.5% from the optimum value obtained by the intermediate Proctor test, dimension tolerance of ±0.5 mm, and height tolerance of ±1.0 mm.
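For orientation, the dosages and the compaction procedure above imply the following batch masses and nominal compaction energy per unit volume; a minimal Python sketch, assuming the water content is referred to the dry mass of soil plus cement and using an arbitrary 10 kg dry-soil basis:

    # Batch masses following the stated dosages: 6% cement by dry weight of soil,
    # fiber content referred to the dry weight of soil plus cement, 8.5% moisture.
    dry_soil = 10.0                                  # kg (arbitrary basis)
    cement = 0.06 * dry_soil                         # 0.60 kg
    fiber = 0.01 * (dry_soil + cement)               # 1.0% content -> 0.106 kg
    water = 0.085 * (dry_soil + cement)              # ~0.90 kg (assumed reference mass)

    # Nominal compaction energy per unit volume for the beam specimens:
    # 2.5 kg hammer, 0.305 m drop, 130 blows on each of 3 layers,
    # 100 x 100 x 350 mm mold.
    g = 9.81                                         # m/s^2
    energy = 130 * 3 * 2.5 * g * 0.305               # ~2,917 J per specimen
    volume = 0.100 * 0.100 * 0.350                   # 0.0035 m^3
    print(f"{energy / volume / 1000:.0f} kJ/m^3")    # ~833 kJ/m^3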
Experimental Program

The experimental program was divided into two parts. The first consisted of studying Proctor compaction at the intermediate energy of CSF to understand the effect of including fibers on compaction. In the second part, static flexural tests were carried out to evaluate the influence of the type of fiber, fiber content, and curing time on the mechanical behavior of the material. The results were obtained as the average value of three specimens. Therefore, the study variables of interest are:
• Type of fiber: since fibers of different natures can offer different contributions to mechanical behavior, this study focuses on the influence of the fiber type (polypropylene or steel fiber).
• Fiber content: it is essential to use different fiber amounts to check trends caused by the fiber content increase in the matrix. Therefore, to evaluate the fiber effect on flexural behavior, three fiber contents were used in the experiment: 0.5%, 1.0%, and 1.5%.
• Curing time: three curing times were used (3, 7, and 28 days) to find different trends in how each fiber influences strength and stiffness as the matrix becomes stronger and more rigid.

Four-Point Static Flexural Test

The specimens were submitted to the 4-point flexural test after the curing time. The test was performed according to ASTM C1609/C1609M-12 [44] in an Instron® (Norwood, MA, USA) universal testing machine. The beams were centralized and aligned parallel to the roller supports and the LVDT's support frame. An external load cell with a capacity of 10 kN was used to measure the loading, while an LVDT was positioned in the center of the support to measure the deflections in the middle of the tested beam. The static flexural tests were carried out in a displacement-controlled mode, in which a deflection rate of 0.075 mm/min was used, as recommended by the standard, until the deflection reached 3 mm. The geometry of the test specimen and the test setup for the flexural test are shown in Figure 2.
Regarding mechanical behavior, each tested material has a specific behavior concerning the load-deflection curve, which is governed by the fiber content in its matrix. The CS without fiber presents brittle behavior: the flexural strength drops to zero after the first cracking. The fibrous composite can exhibit two distinct behaviors: deflection-hardening or deflection-softening [6,40]. The first is characterized by an increase in the load capacity after the first rupture and multiple cracks, while the second is identified by a drastic drop (single crack) in the load that can be followed by either a second increase or a continued decrease in strength [6,40]. The first point of analysis is the first cracking point, which is defined as the point that represents the transition from linear to non-linear behavior of the load-deflection curve. This point is known as the limit of proportionality (LOP). The next point is where the maximum load occurs after the LOP. This point is known as the modulus of rupture (MOR) and corresponds to the maximum load supported by the pull-out force of the fibers after the initial crack. Note that the LOP and MOR are coincident points for composites without fiber. Equation (1) was used to calculate the flexural stress, and the maximum load between the LOP and MOR points was used to evaluate the peak strength of the specimen:

f = (P_i × L) / (b × d²)    (1)

where P_i is the maximum applied load (N), L is the span length (mm), b is the width of the specimen (mm), and d is the depth of the specimen (mm).
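A minimal sketch of Equation (1) in Python; the 300 mm span is an assumption for third-point loading of the 350 mm beam, as the span is not stated in the text:

    def flexural_stress_mpa(load_n, span_mm=300.0, width_mm=100.0, depth_mm=100.0):
        # Equation (1) for third-point loading: f = P * L / (b * d^2), in N/mm^2 = MPa.
        return load_n * span_mm / (width_mm * depth_mm ** 2)

    # Example: the load level corresponding to the 0.58 MPa peak strength of the
    # plain CS matrix reported later in the text.
    print(flexural_stress_mpa(1933.0))   # ~0.58 MPa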
Standard ASTM C1609-12 [44] recommends the L/600 (corresponding to a net deflection equal to 1/600 of the span, 0.5 mm) and L/150 points to be evaluated in flexural behavior; the L/300 point was additionally added to this study for a complete analysis. The load applied after these points corresponds to the residual strength. For cemented base materials with or without fibers, stiffness is a key parameter in developing mechanistic analysis of the pavement. Therefore, to evaluate the fiber effect on material stiffness and for future stress-strain analysis, the static flexural modulus was determined from the stress-deflection relationships (secant modulus corresponding to 50% of MOR) by Equation (2), where S_m is the static flexural modulus (MPa), P is the load (N) for the corresponding deflection δ (mm), ν is the Poisson's ratio of the stabilized material, L is the length between supporting rollers, and w and h are the average width and height (mm), respectively. The value of ν was assumed to be 0.2. In addition to flexural strength and the static flexural modulus, other parameters are commonly used to describe the flexural behavior of composites. In this study, the ductility index and toughness were evaluated to compare the flexural performance of the CSF. According to Jamsawang et al. [7], the ductility index of a composite material is the ratio between the deflection at the modulus of rupture (MOR) and the first crack (LOP) deflection. The higher the ratio, the more ductile the fibrous composite. Therefore, the ductility index (DI) is defined by Equation (3):

DI = δ_MOR / δ_LOP    (3)

Toughness is the energy absorption related to the area under the load-deflection curve up to a specific point. According to Khattak et al. [2] and Sobhan et al. [45], higher energy absorption materials have higher fatigue failure strength, which is desired in pavement applications. In this study, the toughness was calculated using the area below the load-deflection curve until the determined deflection point (0.5, 1.0 and 2.0 mm).

Dry Unit Weight and Water Content

The compaction for the flexural tests of the CS and CSF mixtures was performed at Proctor's intermediate energy. The influence of fiber type and fiber content on dry density and water content is shown in Figure 3. Firstly, there were decreases of 0.04%, 2.03%, and 3.90% in the dry unit weight for the mixtures with PP fiber at the contents of 0.5, 1.0, and 1.5%, respectively. According to Onyejekwe & Ghataora [23] and Tran et al. [34], this tendency of higher PP fiber content resulting in dry unit weight reduction can be explained by the fact that flexible fibers absorb a part of the compaction energy. The steel fiber mixtures had increases of 1.61, 3.49, and 4.51% in dry unit weight for samples with fiber contents of 0.5, 1.0, and 1.5%, respectively. Although there is no research focusing on the steel fiber effect in mixtures with soil, these increases were expected since the volume of steel in the mix corresponds to a much larger mass than if the CS matrix occupied this volume. The optimum water content was essentially the same for the PP fibers since they do not absorb water and do not represent a significant percentage of the total mass of the mixture (approximately 0.75%). On the other hand, there was a slight decrease in the optimum water content due to the inclusion of steel fibers, since the total mass of the mixture is significantly increased by the volume of steel fiber (approximately 6%).
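Before turning to the flexural results, the ductility index of Equation (3) and the toughness integration defined earlier can be sketched as follows (Python; the load-deflection arrays are illustrative, not measured data):

    import numpy as np

    # Illustrative load-deflection curve (deflection in mm, load in kN).
    deflection = np.array([0.0, 0.1, 0.2, 0.5, 1.0, 2.0, 3.0])
    load = np.array([0.0, 1.2, 1.9, 1.6, 1.8, 1.7, 1.5])

    d_lop, d_mor = 0.2, 1.0            # deflections at LOP and MOR read off the curve
    di = d_mor / d_lop                 # Equation (3): ductility index

    def toughness(defl, p, d_max):
        # Area under the load-deflection curve up to d_max (kN*mm = J).
        mask = defl <= d_max
        return np.trapz(p[mask], defl[mask])

    print(di, [toughness(deflection, load, d) for d in (0.5, 1.0, 2.0)])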
Flexural Behavior of Load-Deflection Curves

The cemented soil without fiber showed brittle behavior, that is, the load increases linearly until it reaches its maximum and then drops drastically to zero. The cemented soil with fiber samples presented LOP deflections close to those without fiber, indicating that the linear trend until the first cracks is not affected by the fiber. However, after this point, the load capacity of the fiber specimens continues to increase, exhibiting a deflection-hardening or softening behavior, characteristic of ductile materials and dependent on the fiber contents [38]. The deflection value at the MOR point is affected by the fiber type: the PP fiber presented MOR deflections ranging from 1 mm to 3 mm, while the steel fiber had MOR deflections between 0 and 1 mm. The post-cracking response differed with the fiber type. PP fiber maintains higher strengths close to its peak strength, while steel fiber has a more considerable loss in strength at higher deflections.
Therefore, PP fibers are mobilized at more significant deformations than steel fibers. The deflection-hardening behavior of the composite can be enhanced by increasing the gap between the LOP and MOR and their corresponding deflections [36]. Regarding the load-deflection curve response, at 0.5% of fiber volume, both fiber mixtures showed deflection-softening with a low load-carrying capacity after first cracking. The increase in fiber content contributes to a rise in the deflection-hardening behavior of the material, and for a complete deflection-hardening behavior, with an immediate rise in load after first cracking (LOP), 1.5% fiber content was necessary for both PP and steel fibers. In addition, it is noted that a curing time of 28 days was required to fully develop this characteristic, especially for the steel fibers. The increase in curing time reduces the load drop after the first crack (LOP) and may represent an increase in the bond strength and the contribution of the fiber in the matrix. Furthermore, the peak strength (MOR) deflection decreases as a function of the curing time. In other words, less deformation is required to activate the pull-out forces at the fiber soil-cement interface with the increase in curing time.
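Following these definitions, the distinction between the two post-cracking regimes reduces to comparing the first-crack load with the maximum load reached afterwards; a simple sketch with hypothetical values:

    def classify_flexural_behavior(p_lop, p_post_lop_max):
        """p_lop: load at first crack (LOP); p_post_lop_max: maximum load after LOP."""
        if p_post_lop_max > p_lop:
            # Load capacity rises again above the first-crack load: multiple cracks.
            return "deflection-hardening"
        # Load drops after the first crack (single dominant crack).
        return "deflection-softening"

    print(classify_flexural_behavior(1.9, 2.3))   # -> deflection-hardening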
Figure 7 shows the effect of fiber type, fiber content, and curing time on the values of LOP. The increase in PP fiber content has a negligible impact on the LOP strength of the material. Two suppositions can explain this: the first relies on the decrease in dry unit weight, resulting in a specimen with less cement and fewer compacted particles in the mixture due to loss of compaction energy by the PP fibers. Moreover, the second is attributed to the difficulty in inserting the PP fibers in the matrix, which, in large quantities, can agglomerate, creating a barrier that hinders the adhesion between the CS particles. In other words, the CSF depends mainly on the strength of the CS matrix rather than the fiber cohesion [46].
Differently from the PP fiber, the steel fiber presents slightly higher LOP strength with the increase in its content. However, these values did not follow a specific trend, and the rate of increase in LOP is not significant, which implies that the steel fibers used in this study did not interfere with the LOP strength of the mixtures. Figure 8 presents the average values of the peak strength, which is the higher value between the MOR and LOP strengths, for the mixtures with PP and steel fibers, considering the three different curing times. First, it is noted that all mixtures with fibers had higher peak strength than the CS matrix (0.58 MPa). At 3 days, the insertion of fibers at the contents of 0.5, 1.0, and 1.5% contributed increases of 16.5, 50.7, and 67.0% in the peak strength, respectively, for the PP fibers. On the other hand, the steel fibers had increases of 3.8, 24.4, and 31.8% for the contents of 0.5, 1.0, and 1.5%, respectively. This result indicates that the rate of increase in peak strength of the PP fiber is higher in comparison with the steel fiber, which implies that PP fibers can contribute to the load faster at the same fiber content. At 7 days, the contribution rate of PP fibers to peak strength decreased significantly (5.2/23.2/38.5%) compared to 3 days. In contrast, steel fibers maintained the contribution rate for the volumes of 0.5 and 1.0% (8.7 and 22.1%) and significantly increased it for 1.5% of fiber content (51.8%).
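For scale, these percentage gains can be converted into absolute peak strengths against the 0.58 MPa plain-matrix baseline; a quick sketch using the 3-day values quoted above:

    cs_peak = 0.58                                             # MPa, plain CS matrix
    gains_3d = {"PP": {0.5: 16.5, 1.0: 50.7, 1.5: 67.0},
                "steel": {0.5: 3.8, 1.0: 24.4, 1.5: 31.8}}     # percent increase

    for fiber, by_content in gains_3d.items():
        for content, pct in sorted(by_content.items()):
            print(f"{fiber} {content}%: {cs_peak * (1 + pct / 100):.2f} MPa")
    # e.g., PP at 1.5% -> 0.58 * 1.670 = 0.97 MPa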
The increase in fiber content provided higher peak strength in the same proportion for 28 curing days compared with 7 curing days. Therefore, it is concluded that both PP and steel fibers provided similar increases in peak strength at the same volume content for 7 and 28 curing days. However, for 3 curing days, the steel fiber needed a stronger and more rigid matrix to be mobilized.

Ductility Index

The results calculated for the ductility index are shown in Figure 9. These responses point out that the mixtures with PP fibers have a higher ductility index (DI) than those with steel fibers due to the higher MOR deflection achieved in these mixtures. The PP fiber mixtures reached DI values of 93-126 for 3 days, 49-116 for 7 days, and 51-71 for 28 days. These findings differ from Jamsawang et al. [7], who reported that PP fibers react to load more slowly than steel fibers, with DI values increasing with increasing fiber contents. There are two tendencies concerning the PP fiber inclusion in the mixture: the increase in fiber content increases DI values, and the increase in curing time reduces DI values.
These responses point out that the mixtures with PP fibers have a higher ductility index (DI) than those with steel fibers, owing to the higher MOR deflection achieved in these mixtures. The PP fiber mixtures reached DI values of 93-126 at 3 days, 49-116 at 7 days, and 51-71 at 28 days. These findings differ from those of Jamsawang et al. [7], who reported that PP fibers react to load more slowly than steel fibers. Two tendencies are observed concerning the PP fiber inclusion in the mixture: increasing the fiber content increases the DI values, and increasing the curing time reduces them.

The steel fiber inclusion provides DI values of 20-35 at 3 days, 12-24 at 7 days, and 13-35 at 28 days. Excluding the steel fiber mixtures at 1.0% for 28 days of curing, two tendencies are observed: first, increasing the fiber content decreases the DI values; second, increasing the curing time also reduces them. Similar results are reported by Jamsawang et al. [7], where the steel fiber has a low DI due to its high stiffness, which allows it to be quite effective in carrying the peak load at a small deflection.

Residual Strength

The residual strength represents the ability of fiber-reinforced concrete to sustain load after the first crack at different specific deflections [12]. The results are presented in Figures 10-12. The residual strength is one of the main benefits of CSR, as the fiber bridging effect helps control the energy release rate; thus, CSR retains its ability to carry load after the peak (residual load) [12]. Comparing PP and steel fibers, although they reach similar values, they do not show the same tendency. In general, the mixtures with PP fibers were able to maintain or increase the residual strength with increasing deflection, even though their first peak strength was lower than that of the steel fiber mixtures. In contrast, the steel fibers lose residual load after large deflections but can maintain or even increase the residual strength at lower deflection values. For both fibers, increasing the fiber content also increases the residual strength, with the fiber content of 1.5% presenting the maximum residual strength values.
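Since the residual strengths in Figures 10-12 are read off at fixed span fractions, a short sketch of how such values can be extracted from a measured load-deflection record may be useful. The third-point-bending stress conversion sigma = P*L/(b*d^2) is an assumption (the usual ASTM C1609-style formula); the study's exact test geometry and evaluation may differ, and all names here are illustrative:

```python
import numpy as np

def residual_strengths(deflection_mm, load_kN, span_mm, b_mm, d_mm,
                       fractions=(600, 300, 150)):
    """Residual flexural stress (MPa) at net deflections L/600, L/300 and L/150.

    Assumes `deflection_mm` is monotonically increasing (displacement control)
    and the elastic third-point-bending formula sigma = P*L/(b*d^2).
    """
    x = np.asarray(deflection_mm, dtype=float)
    P = np.asarray(load_kN, dtype=float) * 1e3  # kN -> N
    out = {}
    for frac in fractions:
        delta = span_mm / frac                   # target deflection (mm)
        load_at_delta = np.interp(delta, x, P)   # linear interpolation of the record
        out[f"L/{frac}"] = load_at_delta * span_mm / (b_mm * d_mm ** 2)  # N/mm^2 = MPa
    return out
```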
Flexural Static Modulus

Figure 13 exhibits the flexural modulus values obtained for the PP and steel fiber mixtures. According to the data, the PP fibers caused a slight decrease in the flexural modulus compared to the CS specimen, and higher fiber contents exhibited a more significant decrease. At 3 days of curing, the PP fiber presented the largest decreases in flexural modulus, of 4.8, 6.1, and 33.2% at 0.5, 1.0, and 1.5% fiber content, respectively. As the CSF matrix becomes more rigid (28 curing days), the decreasing effect of the PP fiber on the flexural modulus is reduced to values varying from 12 to 15% at fiber contents of 1.0 and 1.5%, respectively. On the contrary, at 1.0% fiber content, the steel fiber provided a slight increase in flexural modulus of approximately 4, 5, and 10% at 3, 7, and 28 curing days, respectively.
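The flexural modulus in Figure 13 is obtained from the stiffness of the pre-peak load-deflection response. A sketch of one common evaluation, assuming third-point bending of a simply supported beam with midspan deflection delta = 23*P*L^3/(1296*E*I) and I = b*d^3/12; the study does not state its exact formula, so treat the closed form and the 10-40% fitting window as assumptions:

```python
import numpy as np

def flexural_modulus(deflection_mm, load_kN, span_mm, b_mm, d_mm,
                     window=(0.10, 0.40)):
    """Flexural static modulus (MPa) from the initial slope of the pre-peak branch."""
    x = np.asarray(deflection_mm, dtype=float)
    P = np.asarray(load_kN, dtype=float) * 1e3                  # kN -> N
    ascending = np.arange(len(P)) <= int(np.argmax(P))          # pre-peak branch only
    fit = ascending & (P >= window[0] * P.max()) & (P <= window[1] * P.max())
    slope = np.polyfit(x[fit], P[fit], 1)[0]                    # dP/d(delta), N/mm
    # From delta = 23*P*L^3/(1296*E*I) with I = b*d^3/12, so E = 23*k*L^3/(108*b*d^3):
    return 23.0 * slope * span_mm ** 3 / (108.0 * b_mm * d_mm ** 3)
```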
However, in general, there were no significant differences or trends in flexural modulus for the CSF with steel fibers when compared to the CS samples. Comparing these results with the polymer-treated fiber cement reported by Onyejekwe and Ghataora [23], a similar trend is observed: the increase in curing time increases the flexural modulus, and the PP fiber mixtures were the weakest, since this fiber tends to form surfaces of weakness at the fiber-soil interface, leading to a reduction in flexural strength. The stiffness evaluation based on the flexural modulus of a cemented base material is an essential component of pavement design and mechanistic analysis. The cemented base materials absorb most of the traffic load; the higher the stiffness, the higher the tensile stresses in the base. These results suggest that PP fibers in excess (1.5%) provide a layer that, owing to its lower stiffness, absorbs fewer loads, which prevents the base from fatigue cracking but can also overload the other layers.

Toughness

The toughness is based on the area below the load-deflection curves up to the deflections corresponding to L/150, L/300, and L/600. Figure 14a-c presents the toughness values with the fiber content increment for the mixtures with PP and steel fibers.
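As defined above, each toughness value is the area under the load-deflection curve truncated at L/600, L/300 or L/150. A minimal trapezoidal-integration sketch over a measured record (function and argument names are illustrative, not from the study):

```python
import numpy as np

def toughness_J(deflection_mm, load_kN, span_mm, fractions=(600, 300, 150)):
    """Area under the load-deflection curve up to L/600, L/300 and L/150, in joules.

    Assumes a monotonically increasing deflection record (displacement control).
    """
    x = np.asarray(deflection_mm, dtype=float)
    P = np.asarray(load_kN, dtype=float) * 1e3  # kN -> N
    results = {}
    for frac in fractions:
        delta = span_mm / frac                  # target deflection (mm)
        keep = x < delta
        xs = np.append(x[keep], delta)
        ys = np.append(P[keep], np.interp(delta, x, P))
        area_Nmm = 0.5 * np.sum((xs[1:] - xs[:-1]) * (ys[1:] + ys[:-1]))  # trapezoid rule
        results[f"L/{frac}"] = area_Nmm / 1000.0  # N*mm -> J
    return results
```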
The results show that the toughness depends on fiber content, fiber type, and curing time. Similar to this study, Kim et al. [27] and Jamsawang et al. [6] reported that all CSF mixtures had higher toughness than the CS and that increasing the fiber content increased the toughness for both fibers. This study also showed that increasing the curing time did not produce differences at L/150 and L/300 but increased the toughness at high deflection values. CSF samples that exhibited deflection-hardening behavior performed better than those exhibiting deflection-softening because they absorbed more energy after cracking [6,27]. Differently from what is reported by Jamsawang et al. [7], at 3 and 7 days of curing the PP and steel fibers had similar toughness values at 0.5 and 1.0 mm of deflection, while at 2.0 mm the PP fibers performed better than the steel fibers. After 28 days of curing, however, the results approached those reported by Jamsawang et al. [7]: the toughness of the steel fiber mixtures at high deflection (2.0 mm) increased significantly and reached values near those of the PP fibers. According to the data, this can be explained by the fact that, in a more rigid matrix, steel fibers can absorb more energy and contribute more to toughness. In contrast, the PP fiber mixtures seem to reach an equilibrium at an optimum fiber content of 1.5%, and adding fiber beyond 1.5% may result in no significant increase in toughness. This trend is not observed for the steel fiber mixtures, which continue to gain strength as the fiber content increases. This explanation is based only on the analyzed data; further investigations are needed.

Effect of the Fiber Inclusion on Mode of Failure

To conclude the analysis, the crack patterns are one of the main parameters for characterizing the performance of a fiber type [7,23]. The images in Figure 15 show the mode of failure verified for the CS and CSF mixtures at 28 curing days after the loading procedure. Firstly, all specimens fail due to tension at the bottom of the tested beam, and all cracks propagate from the bottom to the top. The unreinforced specimen fails with a failure plane at approximately the midspan. On the contrary, all reinforced specimens controlled the crack formation, avoiding the brittle behavior that occurs in the unreinforced sample.
The second consideration is that the specimen's cracking pattern is associated with its fiber content: the higher the content, the smaller the width and length of the cracks for both fibers. The result images also confirm the contribution of each fiber in changing the flexural behavior from deflection-softening, as shown in Figure 15 at 0.5% fiber content, to deflection-hardening, as seen in Figure 15 at 1.5% fiber content. Finally, it is observed that, at 1.5% fiber content, the PP fibers produced a prevalence of multiple cracks instead of one single crack, whereas the steel fibers produced a well-defined crack. The load-deflection curves of the specimens can explain this fact: while the PP fiber mixtures have higher MOR deflections, suggesting they are near the multiple-crack segment of the curve, the steel fiber mixtures have lower MOR deflections and are already in the declining segment of the curve. These observations show that the steel fibers effectively contribute after the macrocrack is already formed, redistributing the stress through the fiber matrix [6,7].

Conclusions

This study investigated the influence of fiber on the flexural performance of a clayey sand soil stabilized with 6% cement for base pavement application. Macro steel and micro polypropylene fibers were blended into the mixtures at 0.5, 1.0, and 1.5% volume content to evaluate the flexural performance at three different curing periods.
Based on the experimental results, the following conclusions are drawn: • Both fibers could control crack propagation in the CS, preventing the bending beam from being divided into two parts. Adding fiber, together with the matrix becoming more rigid with curing time, changed the material behavior from brittle to deflection-hardening for both fibers. For this CS combination, 1.5% of fiber content was necessary, for both fibers, to obtain a complete deflection-hardening behavior and a prevalence of smaller cracks. • The LOP strength and flexural modulus were not influenced by the inclusion of steel fiber, which is reasonable since the matrix is intact and, consequently, there is no relative movement between the soil-cement particles and the fiber. There is a loss in LOP strength and flexural modulus with the PP fiber inclusion, due to the compaction energy lost when inserting PP fibers in the matrix and the low adhesion of the PP fiber to the soil-cement. • PP fiber inclusion provided higher MOR deflection than steel fiber. This characteristic must be considered in pavements, because large deformations may not represent a safe structure. Furthermore, the PP fiber mixtures presented higher peak strength at 3 days, confirming that flexible fibers contribute more in a matrix with lower strength and stiffness and that steel fibers need a more robust matrix to be mobilized. • Concerning ductility and residual strength, mixtures with PP fibers could retain loads after cracking better than those with steel fibers, limiting crack formation. However, these contributions were significant only at higher deflections, which may not be fully exploited in a pavement base due to deformation limits. • In toughness performance, PP fiber addition generally produced higher increases at 2 mm of deflection for 3 and 7 days of curing, while steel fiber significantly increased the toughness at 28 days of curing. Both fiber types had higher toughness values with increasing fiber content. According to the observed tendencies, the contribution of PP fiber to toughness reached a limit for this matrix, while steel fibers may still provide larger values with further additions of fiber content. In summary, the flexural tests confirm the influence of steel fibers on the properties of a cemented matrix, promoting gains in peak strength at small deflections without changing the initial strength and stiffness. A more rigid matrix implies an increase in this contribution, and there are also benefits concerning ductility, cracking control, and residual strength. In contrast, the PP fiber mixtures obtained a greater ductility index, residual strength, and cracking control than the steel fiber mixtures, and the increase in deflection highlights the PP fiber contributions. However, as the fiber content increases, the PP fiber presents lower initial strength and flexural modulus. Moreover, the increase in curing time decreases the ratio of the PP fiber contributions, indicating that this type of fiber performs better in a matrix with lower strength and stiffness, whereas steel fibers need a more robust matrix to be mobilized.
Discontinuous Petrov-Galerkin boundary elements Generalizing the framework of an ultra-weak formulation for a hypersingular integral equation on closed polygons in [N. Heuer, F. Pinochet, arXiv 1309.1697 (to appear in SIAM J. Numer. Anal.)], we study the case of a hypersingular integral equation on open and closed polyhedral surfaces. We develop a general ultra-weak setting in fractional-order Sobolev spaces and prove its well-posedness and equivalence with the traditional formulation. Based on the ultra-weak formulation, we establish a discontinuous Petrov-Galerkin method with optimal test functions and prove its quasi-optimal convergence in related Sobolev norms. For closed surfaces, this general result implies quasi-optimal convergence in the L^2-norm. Some numerical experiments confirm expected convergence rates. Introduction In recent years, the discontinuous Petrov-Galerkin (DPG) method with optimal test functions has drawn some attention. In most cases it is based on an ultra-weak formulation (cf. Després and Cessenat [10,19]) that one obtains by integrating by parts element-wise the underlying first order system and by replacing boundary terms with new unknowns (cf. Botasso, Micheletti and Sacco [4]). Choosing appropriate test functions, discrete stability can be deduced from the stability of the continuous formulation. In this combination, the method has been proposed and analyzed by Demkowicz and Gopalakrishnan, cf. [15,16], with particular emphasis on its possible robustness for singularly perturbed problems [5,7,11,17,18]. In this paper, we develop a DPG method with optimal test functions to numerically solve a hypersingular integral equation on (open or closed) polyhedral surfaces. Our model problem governs the Laplacian and there is no reason to be concerned about stability (the standard Galerkin method is stable). However, as for partial differential equations, our hypothesis is that this strategy can lead to stable approximations for singularly perturbed integral equations stemming, e.g., from acoustic scattering or almost incompressible linear elasticity. We refer to [9,12,13,20,37] for some discussions of stability issues in the boundary element Galerkin method. We follow the abstract framework from [31] where a DPG method for a hypersingular integral equation has been studied in the two-dimensional case of a closed polygonal curve. Here, we deal with the three-dimensional polyhedral case and consider, in particular, open surfaces. Open curves were excluded in [31]. The situation of a closed surface is most convenient since, in that case, the whole formulation is set in simplest Sobolev spaces. The solution is approximated in L 2 and test functions are taken from L 2 and piecewise H 1 -spaces. In this way (apart from a skeleton variable in H −1/2 , which is not being controlled) we completely avoid fractional-order Sobolev spaces. For the first time, a hypersingular integral equation in three dimensions is weakly formulated and approximated in L 2 and H 1 -spaces without relying on complicated dualities usually induced by elliptic operators of order one. The situation on an open surface is slightly more involved. This is due to the fact that, in this case, solutions have strong edge singularities so that corresponding H 1 -estimates (needed for the analysis of the dual problem) must be avoided. Nevertheless, also in this extreme case we are able to present a well-posed formulation and prove quasi-optimal convergence of the DPG method in standard Sobolev spaces. 
These spaces are of fractional order, but close to L 2 and H 1 so that no variational crimes arise. Apart from the new mathematical framework that provides optimal results in standard Sobolev norms, there are practical advantages of the DPG method which apply also in the case of hypersingular operators. We recall the list from [31]. • The matrices of linear systems used for the approximation of optimal test functions and for error calculation are sparse. • System matrices are symmetric and positive definite. • Error control is inherent since errors in the energy norm can be calculated through the implementation of the trial-to-test operator. • Since norms are localizable, the energy norm of the error gives local information which can be used to steer adaptive refinements. • Error estimates and stability hold for any combination of meshes and polynomial degrees so that hp methods do not require a new analysis. • Since approximation spaces can be discontinuous, one has full flexibility for h and p adaptivity. For a detailed discussion of these facts we refer to the previously mentioned references on the DPG method with optimal test functions. There are also limitations of the DPG method. Most importantly, despite of using localizable norms, stiffness matrices are still densely populated when discretizing non-local operators. The global coupling of unknowns enters through the trial-to-test operator since it carries the information of the original operator. Secondly, optimal test functions must be approximated (except for particular cases, see [31]). The influence of this approximation on stability and convergence estimates has been analyzed for partial differential equations, see [24,30], but is an open problem when dealing with integral operators. Finally, strict implementation of our method on open surfaces requires the use of inner products from (slightly) fractional-order Sobolev spaces (namely, of orders −ǫ and 1 − ǫ for fixed ǫ > 0). This is not acceptable in practice and one simply selects ǫ = 0 (corresponding to the parameter s = 1/2 in our estimates). In this limit case we still have quasi-optimal convergence, but the error is controlled in the energy norm defined by the problem rather than in L 2 (see Corollary 18 at the end of Section 4 for this result). Some more comments are made in Remark 19, after the corollary. For convenience of the reader, we briefly recall the abstract framework of the DPG method with optimal test functions. For details and proofs we refer to [16,39]. For Hilbert spaces U , V , bilinear form b : be a well-posed variational formulation. For an approximation space U hp ⊂ U , the Petrov-Galerkin method with optimal test functions consists in calculating u hp ∈ U hp such that Here, Θ : U → V is the trial-to-test operator defined by with inner product · , · V in V . Standard arguments from functional analysis show that u hp is the best approximation of u in the energy norm defined by Now, practicality and efficiency of the method hinge on several ingredients. • The trial-to-test operator Θ is localized by using discontinuous finite elements (the Petrov-Galerkin method is then called discontinuous Petrov-Galerkin method, DPG). Of course, this goes in hand with an appropriate bilinear form b(·, ·). • One wants to select an appropriate norm in U to control the error. For a given V -norm, the energy norm · U is not necessarily convenient. Using a different norm in U , duality via b(·, ·) induces a norm in V that is different from the inner-product norm. 
• Efficient implementation of Θ not only requires discontinuous finite elements but also a localizable inner product in V . This conflicts with the selection of a norm in U . • Even a localized version of Θ cannot be exactly implemented since local spaces are still infinite-dimensional. Therefore, in practice the trial-to-test operator is approximated by selecting finite-dimensional subspaces V r ⊂ V and by defining an approximating operator (This issue is not further analyzed in this paper. We refer to the previously mentioned references [24,30] that deal with partial differential equations.) Principal focus of DPG analysis is to deal with conflicting selections of norms in U and V . The aim is to choose a norm in U that is appropriate for the problem under consideration, and to find a corresponding V -norm that (i) allows for an efficient implementation of the DPG method and (ii) that guarantees a robust error estimate with appropriate convergence order. In this paper, we propose and analyze a DPG method with optimal test functions for a hypersingular integral equation on polyhedral surfaces. First principal step is to develop a wellposed ultra-weak variational formulation. This is done in Section 4.1. In Section 4.2 we analyze the equivalence of different norms in the ansatz space U and the test space V . Solvability (Theorem 14) and stability (Theorem 15) of the ultra-weak formulation is proved in Section 4.3. In Section 4.4 we present the DPG method and our main result (Theorem 17) which is a general Céa estimate. All these results depend on a parameter s that serves to select specific norms in U and V , the most convenient case being s = 1/2. However, for an open surface there are some complications with s = 1/2. This situation is being considered in Corollaries 16 and 18. The remaining parts of this paper are as follows. In the next section we introduce the model problem and define partitions of the surface. For the convenience of the reader, in Section 2.1 we resume the simplest case (parameter s = 1/2) for a closed surface and present the corresponding main result (Theorem 1) which is a particular case of Theorem 17 from Section 4. In Section 3 we collect all the technical results from Sobolev spaces that are not directly related to the DPG analysis. This includes properties of integral operators and regularity of solutions. The principal part is Section 4, as discussed above. Some numerical experiments are presented in the last section. Throughout the paper, a b means that a ≤ cb with a generic constant c > 0 that is independent of involved parameters and functions. Similarly, we use the notation a b and a ≃ b. Model problem and the closed-surface case Let Γ be the boundary of a simply-connected polyhedral domain Ω, or a connected union of some of its faces. Our model problem is the hypersingular integral equation on Γ, Here, n is the exterior unit normal vector on ∂Ω, and f is a given function. In the case that Γ = ∂Ω, (1) represents the Neumann problem for the Laplacian in R 3 \ Ω, when selecting f = (1/2 − K ′ )v with Neumann datum v and K ′ being the adjoint of the so-called double-layer operator, cf. [32]. A weak form of the hypersingular integral equation is where The space H 1/2 (Γ) is the trace space of H 1 (Ω) when Γ = ∂Ω, and consists of trace functions that vanish on ∂Ω \ Γ when Γ = ∂Ω. Furthermore, · , · Γ denotes the L 2 (Γ)-bilinear form and its extension by duality. The rank-one term m Γ (·, ·) eliminates the kernel of W in the case of a closed surface. 
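For orientation, the defining relation of the trial-to-test operator Θ and the resulting energy norm sketched in the introduction take, in the generic (U, V, b) setting, the standard form from the cited DPG literature; this is stated here as a reference point, and the precise displays of this paper may differ slightly in notation:

\[ \langle \Theta u, v \rangle_V = b(u, v) \quad \text{for all } v \in V, \qquad \|u\|_U := \sup_{0 \neq v \in V} \frac{|b(u, v)|}{\|v\|_V} = \|\Theta u\|_V . \]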
In the following, we also need the single layer operator V defined by and surface differential operators curl (scalar) and curl (vectorial), defined as follows. Let ∇ denote the surface gradient on ∂Ω. Following [8], the surface vector curl operator curl v = −n × ∇v can be defined on H 1/2 (∂Ω). It maps to a space of tangential vector fields. By curl , we denote the adjoint operator of curl with respect to the L 2 -bilinear form. On a smooth surface it is given by curl σ = −∇ · (n × σ), where ∇· denotes the surface divergence. An ultra-weak formulation of (1) and resulting DPG method is based on piecewise integration by parts, i.e., it hinges on a partition of Γ. Let T denote such a partition (also called mesh) that is compatible with the geometry, i.e., T is a finite set, the elements T ∈ T are mutually disjoint, open sets with T ∈T T = Γ, and every element is a subset of a face of Γ. Furthermore, for simplicity, we assume that every T ∈ T is an affine image of one of a finite set of reference elements T ⊂ R 2 . As T is finite, all its elements T are shape-regular in the sense that (with ω = T ) Let us denote by γ T := max{γ T ; T ∈ T } the shape-regularity parameter of T . Throughout we assume that γ T is bounded. We also introduce a mesh parameter by h := min{h T ; T ∈ T } with h ω := ( ω 1 ds) 1/2 for any ω ⊂ Γ. In the remainder of the paper we establish a general ultra-weak formulation of the hypersingular integral equation, prove its well-posedness, and analyze a DPG method that is based on the new formulation. In the case of an open surface, details are quite technical and require some tedious definitions of particular norms. In the following section, for ease of reading, we therefore present the case of a closed surface which can be much simplified. In particular, we propose an ultra-weak formulation, present the DPG method with optimal test functions, and announce one of our main results that establishes quasi-optimal convergence of the error in L 2 -norms. The analysis of this situation is simply a particular case in Section 4. Ultra-weak formulation and DPG method for a closed surface We need to define some Sobolev spaces which will be generalized later in this paper. We use the standard L 2 and H 1 notation, and use bold symbols to indicate (tangential) vector-valued functions and their spaces. Let us mention at this point that there are three components of the ultra-weak solution, φ, σ = Vcurl φ, and tangential components of σ on the skeleton of T . Both φ and σ will be measured in L 2 , and for the tangential components of σ we need the interface space Here, the symbol S denotes the skeleton of the partition T (the collection of boundaries ∂T with T ∈ T ) and H(curl , Γ) consists of L 2 (Γ)-functions σ with curl σ ∈ L 2 (Γ), equipped with the standard norm. On the boundary ∂T of an element T ∈ T , t is the unit tangential vector along ∂T and σ · t| ∂T is the canonical tangential trace of a function from H(curl , T ). The space Test functions will be taken from L 2 (Γ) and the piecewise H 1 -space with canonical product norm. Ultra-weak formulation. For given f ∈ L 2 (Γ) with f , 1 Γ = 0, the ultra-weak formulation of (2) reads as follows: Here, curl T is the piecewise curl -operator, and [v] indicates the jump of v across S, appearing through the definition Note thatσ| ∂T = σ · t| ∂T and that the tangential vector t on an edge of the mesh (which does not lie on ∂Γ) has different directions on the elements that share the edge. The term m Γ ensures that Γ φ = 0. 
We abbreviate the ultra-weak formulation as: with bilinear form b : The spaces U 1/2 and V 1/2 are Hilbert spaces with canonical norms DPG method. Consider a discrete space U hp ⊂ U 1/2 (usually piecewise polynomial functions with respect to the mesh T ) and let be a basis of U hp . Then we define the discrete test space V hp ⊂ V 1/2 with basis given by Here, Θ : There holds the following result. Theorem 1. For f ∈ L 2 (Γ) with f , 1 Γ = 0, the weak formulation (2) and ultra-weak formulation (4) are uniquely solvable and equivalent. The solution of (4) is given by σ = Vcurl φ, σ = T ∈T σ · t| ∂T and with φ being the solution of (2). Furthermore, there is a unique solution (σ hp , φ hp ,σ hp ) ∈ U hp of the DPG scheme (5). It satisfies the quasi-optimal error estimate The hidden constant in the estimate is independent of T as long as the shape-regularity parameter γ T is bounded. In what remains, a proof of this result is developed in a more general setting, comprising open and closed surfaces. The particular case of the theorem, for a closed surface, is obtained by combining the results of Theorems 14, 15, and 17 from Section 4 for the selection s = 1/2. Note that the regularity parameter C Γ (1/2) appearing in Theorems 15 and 17 is bounded in this case. Basic results from Sobolev spaces In this section we collect several technical results that will be needed for the analysis of our ultra-weak formulation and the DPG method. In the first part we give definitions of Sobolev spaces and norms, and recall properties of boundary integral operators. Furthermore, we introduce a regularity parameter C Γ (·) for the solutions of hypersingular integral equations. This parameter enters the stability estimate of the general ultra-weak formulation and also appears in the quasi-optimal error estimate of the DPG method. In the second part, Section 3.2, we present equivalences and bounds for several fractional-order Sobolev norms and recall some properties of surface differential operators, including an integration-by-parts formula. We also introduce some further Sobolev spaces that are related with the surface differential operators. Sobolev spaces, properties of integral operators, and regularity From now on, ω ⊂ ∂Ω always denotes a subset of the boundary ∂Ω of a bounded polyhedral domain Ω ⊂ R 3 such that ω itself has a Lipschitz boundary ∂ω. The spaces L 2 (ω) and H 1 (ω) with norms · L 2 (ω) , · H 1 (ω) and semi-norm |·| H 1 (ω) denote usual Sobolev spaces. Vector valued versions of these spaces and their elements will be denoted by bold symbols, e.g., σ ∈ H 1 (ω). More precisely, all the vector-valued functions and their spaces are tangential to the surface under consideration. The space H 1 (ω) (traditionally denoted by H 1 0 (ω)) is the space of H 1 (ω) functions with vanishing trace on the boundary ∂ω equipped with the norm |·| H 1 (ω) . For s ∈ (0, 1), (semi-) norms are defined by In some proofs, we will also employ Sobolev norms defined by [3]. It is well known that their norms are equivalent with the corresponding norms of Sobolev-Slobodeckij type defined previously. Though equivalence numbers depend on s and the domain ω, cf. [28,29]. Abusing notation, we set H s (Γ) : . Recall the notation h ω = ( ω 1 ds) 1/2 ; it is a measure for the diameter of ω. If a space carries the lower index 0, e.g., H s 0 (Γ) and L 2 0 (Γ), it consists of functions with vanishing integral mean in case of a closed surface Γ. If Γ is open the index 0 has no meaning. 
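The Sobolev-Slobodeckij (semi-)norms invoked above have the standard form on a two-dimensional surface piece ω, for s ∈ (0, 1); this is the usual definition from the literature and is recorded here for reference:

\[ |v|_{H^s(\omega)}^2 = \int_\omega \int_\omega \frac{|v(x)-v(y)|^2}{|x-y|^{2+2s}} \, ds_x \, ds_y, \qquad \|v\|_{H^s(\omega)}^2 = \|v\|_{L^2(\omega)}^2 + |v|_{H^s(\omega)}^2 . \]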
Spaces with negative order s < 0, as well as their respective norms, are defined dual to spaces with positive order −s > 0. Duality is understood with respect to the extended L 2 -inner product, which is denoted · , · with appropriate index to indicate the geometric object under consideration. For example, the dual space of H s (ω) is H −s (ω), and the norm is given by . . We will also need the corre- can be extended to serve as duality between H s (T ) and H −s (T ). As previously mentioned, spaces with bold symbols denote vector-valued versions of scalar-valued spaces, e.g., H s h (T ). Their elements are also denoted by bold symbols. For a partition {ω j } of ω ⊂ Γ into non-overlapping Lipschitz sub-domains (sub-surfaces), and s ∈ [0, 1], we will make frequent use of the estimate (which is immediate by the definition of the norms) and the following bound from [1, Theorem 4.1] (see [28], [21] for estimates in different, equivalent norms), We recall some properties of the integral operators, cf. [14,32,34,35]. Proposition 2. For s ∈ [−1/2, 1/2] the following mappings are continuous, Furthermore, both operators are elliptic, We need the following regularity parameter (or norm of the inverse of W): It is well known that is unbounded at 1/2, cf., e.g., [38]. Therefore, in the following we restrict certain regularity parameters differently, depending on whether Γ is closed or open. Furthermore, throughout this paper, we are interested in selecting large parameters s in C Γ (s). Some technical details to be analyzed get more complicated when s tends to 0. Therefore, subsequent stability estimate will not be uniform in s. To handle the different cases we define the interval and recall that Technical results According to [ with a constant C > 0 that depends only on ω. The next lemma presents scaling properties of some fractional-order Sobolev (semi-) norms. an open surface and with ω being a reference element. Then, for s ∈ [0, 1], and, for s ∈ (0, 1), All the hidden constants depend only on γ ω . Proof. The ubiquitous scaling results (13), (14) follow readily, cf. [28] for (13), and see [29] for transformation properties of semi-norms. According to [ with a constant which depends only on ω. Finally, an application of the estimates (13) shows (15). The estimate (16) is obvious on the reference element ω and is transferred to ω with the estimates (13). The next lemma presents some estimates for different Sobolev norms and establishes bounds for surface differential operators. Here, curl T is the piecewise curl -operator, i.e., curl T v| T = curl v| T for any T ∈ T and sufficiently smooth function v. Furthermore, for s ∈ (−1/2, 0], one has For s ∈ (0, 1/2] there holds and All hidden constants are independent of T for bounded shape-regularity parameter γ T . Proof. The estimates (17) follow readily, and by duality we then conclude (18). To prove (19), we decompose the polyhedral surface Γ into its faces Γ j and use (7) and Lemma 3 to bound with constants c j depending on Γ j . Now, for any face Γ j , curl : is bounded (see [22,Lemma 2.2]) so that the operator curl , being the adjoint operator of curl , is continuous from H 1/2 (Γ j ) to H −1/2 (Γ j ). By interpolation between, e.g., H 2 (Γ j ) and H 1/2 (Γ j ), one concludes that curl : is continuous for s ∈ (0, 1/2]. Therefore, continuing the estimate (22) and also making use of (6), we obtain curl τ 2 with a hidden constant that depends on the geometry of Γ. This is (19). It remains to show (20). 
To this end it is enough to bound, for T ∈ T and v ∈ H 1/2+s (T ), with hidden constant depending on γ T . The definition by duality of the norm on the left-hand side of (23) and estimates (13), (15) yield The operator curl continuously maps H 1 ( T ) to L 2 ( T ) and also H 1/2 ( T ) to H −1/2 ( T ), cf. [22, Lemma 2.1]. By interpolation, it also maps H 1/2+s ( T ) continuously to H −1/2+s ( T ) and hence, Here, we also used a quotient-space argument to switch to the semi-norm, cf. [29], and the scaling property (14). This yields (23). The proof of bound (21) is similar to that of (20). One just has to use that the norm · H −1/2+s (T ) is scalable of the same order as · H −1/2+s h (T ) . Discontinuous Petrov-Galerkin method This is the central section of the paper. We present and analyze a general ultra-weak formulation of the hypersingular integral equation (1) and, based on this formulation, propose a DPG method and prove its quasi-optimal convergence. The structure is as follows. In the first subsection we present the ultra-weak formulation and show that its bilinear form is definite. This is essential for the definition of norms by duality via the bilinear form. In Section 4.2 we analyze these norms, in the trial and the test space, and show essential mutual estimates (Lemmas 12 and 13). Section 4.3 establishes the equivalence of the ultra-weak formulation and the standard weak formulation (Theorem 14) and proves well-posedness of the former (Theorem 15). Finally, in Section 4.4 we recall the DPG method (in the now general setting) and present the main result (Theorem 17) which establishes a general Céa estimate for the DPG method. Norm equivalences In Lemma 12, we will investigate the relation between the norm · V s from (30) and By Lemma 9 this defines a norm for s ∈ (0, 1/2]. Inspection shows that Recall that, by definition, m Γ (v, v) 1/2 = | v , 1 Γ | if Γ is closed, and m Γ (v, v) = 0 otherwise. By duality arguments, the norm equivalence · V s ≃ · V s ,opt,α,β,ρ we aim at implies equivalence in U s of the norm · U s ,α,β,ρ from (29) and the so-called energy norm which is a norm for s ∈ (0, 1/2] due to Lemma 9. Proving the equivalence of norms in the test space requires studying the stability of the adjoint problem. This will be done in the next two lemmas. There is a number C hom (T , s) > 0, which depends only on T and s, such that Proof. Since τ = −curl T v and curl T : H 1/2+s (T ) → H −1/2+s (T ) (with bound depending on T and s), it suffices to show that where for simplicity we use the same name for the constant. The proof is indirect. The v-components of solutions (τ , v) to (38) (19). Hence the subspace of functions v ∈ H 1/2+s 0 (T ) with W T v = 0 is closed. Suppose that (39) does not hold. Then there is a sequence (v j ) ⊂ H 1/2+s 0 (T ) whose elements satisfy By Rellich's imbedding theorem (applied element-wise) there exists a subsequence, again denoted by (v j ), that converges to an element v ∈ H with α = 1, β = h −1/2+s , ρ = 1, and, for s ∈ I Γ , we have with α = C Γ (s) −1 h 1/2−s , β = C Γ (s) −1 , and ρ = C hom (T , s) −1 . Here, C hom (T , s) is the number from Lemma 11 and C Γ (s) has been defined in (11). The hidden constants in the estimates above are independent of T as long as the shape-regularity parameter γ T is bounded. Proof. Estimate (20) shows that Moreover, by the continuity of curl (19) and V (8), using (18) and (7), for any τ ∈ H conclude the proof of (40). To prove (41) we have to show that for any (τ , v) ∈ V s and s ∈ I Γ . 
To this end, for given (τ , v) ∈ V s define g 1 := τ + curl T v and g 2 := curl Vτ , and denote by (τ c , v c ) the solution of (34) established by Lemma 10. If Γ is open: (38). The triangle inequality and stability estimate by Lemma 10 combined with v c H 1/2+s (T ) v c H 1/2+s (Γ) by (6), and Lemma 11 show (42). If Γ is closed: Let c v := |Γ| −1 v , 1 Γ be the average of v. It satisfies (38), and we proceed as before by taking into account the established bound for c v H 1/2+s (T ) and the fact that C Γ (s) is bounded from below by a positive constant. The norm equivalence from Lemma 12 implies a corresponding equivalence in U s . and, for s ∈ I Γ , we have for any (σ, φ,σ) ∈ U s . Here, C hom (T , s) is the number from Lemma 11 and C Γ (s) has been defined in (11). The hidden constants in the estimates above are independent of T as long as the shape-regularity parameter γ T is bounded. We have seen before that σ ∈ H 1/2+s (Γ). This allows for an application of the integration-byparts formula (27), which shows the second identity in (44). Lemma 9 together with Babuška-Brezzi theory shows that the ultra-weak formulation (28) is uniquely solvable. Theorem 15 (solvability of general ultra-weak formulation). Let s ∈ I Γ and f ∈ L 2 0 (Γ) be given. The ultra-weak formulation (28) has a unique solution (σ, φ,σ) ∈ U s which is independent of s, and Here, C hom (T , s) is the number from Lemma 11 and C Γ (s) has been defined in (11). The hidden constant in the estimate is independent of f and T as long as the shape-regularity parameter γ T is bounded. Babuška-Brezzi theory [6] shows that there is a unique solution (σ, φ,σ) ∈ U s of the ultraweak formulation (28) with when choosing the previously specified parameters α, β, ρ. As both the weak and the ultra-weak formulation are uniquely solvable, we conclude that the solution (σ, φ,σ) ∈ U s is independent of s. This finishes the proof. The assertion of Theorem 15 excludes s = 1/2 when Γ is an open surface. In that case we still have stability in the energy norm. Corollary 16. Let f ∈ L 2 0 (Γ) and s ∈ (0, 1/2] be given. The ultra-weak formulation (28) has a unique solution (σ, φ,σ) ∈ U s which is independent of s, and there holds Proof. By definition of the energy norm (33), being dual with respect to b(·, ·) to · V s and being a norm by Lemma 9, and employing the estimate for the right-hand side functional from the proof of the previous theorem, we obtain Babuška-Brezzi theory proves the assertion in the space U s which is the completion of U s with respect to the energy norm. By Theorem 14, (σ, φ,σ) ∈ U s . This finishes the proof. DPG method with optimal test functions Consider a discrete space U hp ⊂ U s and denote by a basis of U hp , and denote the discrete space of test functions by V hp ⊂ V s with basis given by where Θ : U s → V s is the trial-to-test operator, defined by Here, · , · V s is the inner product in V s which induces the norm · V s in (30). The DPG method with optimal test functions as presented in [15,16], consists in finding (σ hp , φ hp ,σ hp ) ∈ U hp such that A distinguishing feature of the DPG method with optimal test functions is optimal convergence in the energy norm · U s from (33), cf. [16,Thm. 2.2], This best approximation property and Lemma 13 immediately imply the following quasi-optimal error estimate in standards norms. Theorem 17 (general Céa estimate). Given s ∈ I Γ , let (σ, φ,σ) ∈ U s and (σ hp , φ hp ,σ hp ) ∈ U hp be the solutions of the ultra-weak formulation (28) and the DPG scheme (47), respectively. 
Then, Here, C hom (T , s) is the number from Lemma 11 and C Γ (s) has been defined in (11). The hidden constant in the estimate is independent of T as long as the shape-regularity parameter γ T is bounded. In principle, for every s ∈ I Γ , one obtains a DPG method with corresponding error estimate. Nevertheless, only the case s = 1/2 is practical since then, the inner product in V s reduces to L 2 and piecewise H 1 -bilinear forms which are easy to implement (for the calculation of test functions and for error control). On open surfaces, taking the limit s → 1/2 lets C Γ (s) tend to infinity so that the lower bound of the error estimate from Theorem 17 is useless in this case. However, by Corollary 16, there is still a stable unique ultra-weak solution in U 1/2 and Lemma 13 provides an upper bound for the energy norm in U s for any s ∈ (0, 1/2]. Therefore, using the best approximation property (48), we can state the following error estimate that comprises the extreme case of an open surface and test space V 1/2 . Corollary 18. Let f ∈ L 2 0 (Γ) and s ∈ (0, 1/2] be given. Furthermore, let (σ, φ,σ) ∈ U s and (σ hp , φ hp ,σ hp ) ∈ U hp be the solutions of the ultra-weak formulation (28) and the DPG scheme (47), respectively. Then, The hidden constant in the estimate is independent of T as long as the shape-regularity parameter γ T is bounded. Let us conclude the theoretical part with the following remark. Remark 19. By the missing H 1 -regularity of the solution φ to the hypersingular integral equation on open surfaces, our analysis did not lead to an estimate · V 1/2 · V 1/2 ,opt,α,β,ρ for the two test norms in this case, cf. Lemma 12. For this reason, stability of the ultra-weak formulation on open surfaces, and an error estimation for the DPG method, were obtained for s = 1/2 only in the energy norm · U 1/2 . This norm is weaker than the L 2 -norm (for principal unknowns), though we did not prove that it is strictly weaker. One could use weighted test norms to obtain estimates in product norms (rather than the energy norm which mixes different components). In this way, one expects to achieve stability and error estimates in weighted L 2 -norms (for principal unknowns). Nevertheless, in this paper we stick to standard Sobolev norms which are easier to implement than weighted Sobolev norms. Numerical experiments We report on four numerical experiments. The underlying surfaces are piecewise flat, and the partitions T consist of triangles T without hanging nodes. We consider lowest order, piecewise constant functions for all components of the approximation space U hp (T ). In particular, approximations forσ on S are piecewise constant on the edges of the mesh T . Note that for all s ∈ (0, 1/2], U hp (T ) ⊂ U s . The discrete test space V hp from (45) has finite dimension but, as discussed in the introduction, V s is infinite-dimensional so that the trial-to-test operator Θ from (46) must be approximated. This gives rise to the so-called practical DPG method, cf. [24]. It consists in selecting a finite-dimensional subspace V h,p+r (T ) ⊂ V s and using, instead of Θ, the approximated operator that one obtains by solving (46) in V h,p+r (T ). As indicated by the notation, we define V h,p+r (T ) as a piecewise polynomial space on the mesh T used for U hp , and increase polynomial degrees. This is common procedure. 
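Algebraically, the practical DPG method reduces to a symmetric positive definite normal-equations solve: with V the Gram matrix of the test inner product (block-diagonal for a localizable inner product) and B the rectangular discretization of b(·,·), one solves, presumably, B^T V^{-1} B x = B^T V^{-1} f. A schematic sketch of this step (not the authors' C++ implementation; sparse/dense handling is simplified and all names are stand-ins):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def practical_dpg_solve(gram_blocks, B, f):
    """Solve B^T V^{-1} B x = B^T V^{-1} f with V = blockdiag(gram_blocks).

    gram_blocks: list of small dense SPD matrices (local test Gram matrices).
    B: sparse (dim V) x (dim U) discretization of the bilinear form b.
    f: load vector tested against the enriched test space.
    """
    V = sp.block_diag([np.asarray(G) for G in gram_blocks], format="csc")
    W = spla.spsolve(V, sp.csc_matrix(B))     # optimal test coefficients: V W = B
    A = (B.T @ W).toarray()                   # normal matrix B^T V^{-1} B (SPD, dense for nonlocal operators)
    rhs = W.T @ f                             # B^T V^{-1} f
    x, info = spla.cg(A, np.asarray(rhs).ravel(), atol=1e-12)
    assert info == 0, "CG did not converge"
    return x
```

The unpreconditioned conjugate-gradient solve mirrors the "standard iterative solver without preconditioning" described in the experiments below.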
In our experiments we increase degrees by 2 (r = 2 in the notation), which amounts to using piecewise polynomials of degree 2 for τ (which is an L^2-function with lowest order 0), and of degree 3 for v (since the lowest order for an H^1-function is 1). Our choice for s will always be s = 1/2. The inner product in the test space localizes: for v, δ_v ∈ V^{1/2} with supp(v), supp(δ_v) ⊂ T and T ∈ T, it reduces to contributions from the element T alone. Due to this property, the calculation of the trial-to-test operator amounts to solving only local problems. In other words, the left-hand side in (46) is a block-diagonal matrix V ∈ R^{dim V_{h,p+2}(T) × dim V_{h,p+2}(T)} with a fixed block size of at most 10 (owing to the local spaces for v) and hence is cheap to apply and invert. If B ∈ R^{dim V_{h,p+2}(T) × dim U_{hp}(T)} is the discretization of the bilinear form b with respect to the spaces V_{h,p+2}(T) and U_{hp}(T), the overall system can be written as

\[ B^\top \mathrm{V}^{-1} B \, \mathbf{x} = B^\top \mathrm{V}^{-1} \mathbf{f} . \]

The numerical experiments were carried out in C++. To compute the term ⟨φ, curl Vτ⟩_Γ, we use integration by parts and the recurrence formulas from Maischak [33] to evaluate V. In order to compute and store the discretization efficiently, we used the library HLib to employ hierarchical matrices [26,27] and the ACA algorithm [2]. The other parts are stored via standard sparse representations. The matrices B and V and the vector f are computed once, and the system is solved by a standard iterative solver without preconditioning. In all experiments, we choose a sequence of partitions {T_i}_{i=0}^N, where T_0 is the coarsest partition and T_{i+1} is constructed from T_i by refining every triangle into 4 smaller triangles by the so-called newest vertex bisection, which maintains shape-regularity. In all of our experiments, we plot the error in the energy norm ‖(σ − σ_hp, φ − φ_hp, σ̃ − σ̃_hp)‖_{U^{1/2}} in double logarithmic scale versus the number of triangles of the current partition. In two of the experiments we know the exact solution, so that we can also plot the errors in the L^2-norm, ‖σ − σ_hp‖_{L^2(Γ)} and ‖φ − φ_hp‖_{L^2(Γ)}. Additionally, for curiosity, we plot the error ‖σ̃ − σ̃_hp‖_{L^2(S)}. Note that we have not proved any precise estimate for the approximation of σ̃ (the number C_hom(T, s) in the Céa estimate from Theorem 17 is unknown and is expected to depend on the mesh). Moreover, the natural norm for σ̃ is of order −1/2, not 0. Now, by Theorem 17 and standard approximation theory [36], the expected convergence order is O(h) for sufficiently regular solutions. Here, one uses the bound ‖σ̃ − σ_d · t|_S‖_{H^{−1/2}(S)} ≲ ‖σ − σ_d‖_{H(curl,Γ)} and selects an H(curl, Γ)-interpolant σ_d of σ (note that σ̃ = σ · t|_S and curl σ = f). In all the experiments below, we observe the maximum convergence order O(h), with h = O(#T^{−1/2}). Since we plot squares of the errors, this is confirmed by the curves #T^{−1}.

Experiment I. We choose Γ to be the surface of Ω, which is a cube with side length 2, centered at the origin. The coarsest mesh T_0 consists of 12 triangles, 2 on every side of the cube. The exact solution φ is prescribed as a piecewise affine, globally continuous function, with values at the nodes of T_0 such that the mean value of φ vanishes. The outcome of our method with uniform mesh-refinement is shown in Figure 1. It indicates convergence with order O(h). There, and in the following, u and u_hp stand respectively for (σ, φ, σ̃) and its approximation. Note that we do not have precise information about the resulting right-hand side function f. It is unlikely that f ∈ H^1(Γ), though f might be piecewise smooth, in which case an order O(h) could be proved.
Experiment II. Since Ω is centered at the origin, the mean value of f on Γ vanishes. In this case there holds φ, f ∈ H^1(Γ) and σ ∈ H^1(Γ). The resulting convergence order O(h) for uniform mesh-refinement is confirmed by Figure 2.

Experiment III. We choose Γ to be the square (−1, 1)^2. The coarsest mesh consists of the 4 triangles that appear when Γ is divided by its 2 diagonals. The exact solution φ is prescribed as a piecewise affine, globally continuous function, with value 1 at the node in the center of Γ, and values 0 at the nodes on the boundary of Γ. The outcome of our method with uniform mesh-refinement is shown in Figure 3 and indicates convergence order O(h). The comments from Experiment I regarding the regularity of σ̃ apply. For illustration we show some sample solutions φ_hp in Figure 4. They perfectly approximate the selected exact solution φ by piecewise constants.

Experiment IV. The surface Γ and the coarsest mesh T_0 are chosen as in Experiment III. The right-hand side is given by f(x, y, z) = 1. This is the case where φ is expected to have strong edge singularities, so that φ ∈ H^t(Γ) for any t < 1 but not for t = 1 in general, cf. [38]. Figure 5 shows the error in energy norm (squared) for uniform mesh-refinement. It confirms the order O(h), which is the limit case (it is just excluded from the theory). Also for this experiment we show some sample solutions φ_hp (Figure 6). They resemble the typical shape of φ with the square-root singularity dist_Γ(·, ∂Γ)^{1/2}.
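The uniform refinement used in all experiments splits every triangle into four children via two sweeps of newest vertex bisection. A minimal sketch, assuming the common convention that the refinement edge of a triangle (a, b, c) lies between its first two vertices (the paper does not spell out its bookkeeping, so this is an illustrative implementation, not the authors' code):

```python
def refine_into_four(coords, tris):
    """Split every triangle of a conforming mesh into 4 children via two
    sweeps of newest-vertex bisection.

    coords: list of vertex coordinates; tris: list of (a, b, c) index triples
    whose refinement edge is a-b. Edge midpoints are shared across both sweeps,
    so the refined mesh is again conforming.
    """
    coords = [list(p) for p in coords]
    midpoints = {}

    def midpoint(a, b):
        key = (min(a, b), max(a, b))
        if key not in midpoints:
            coords.append([(ca + cb) / 2.0 for ca, cb in zip(coords[a], coords[b])])
            midpoints[key] = len(coords) - 1
        return midpoints[key]

    def bisect_all(triangles):
        children = []
        for a, b, c in triangles:
            m = midpoint(a, b)                  # bisect the refinement edge a-b
            children += [(c, a, m), (b, c, m)]  # children's refinement edges: c-a, b-c
        return children

    return coords, bisect_all(bisect_all(tris))
```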
Macroscopic tensile plasticity by scalarizating stress distribution in bulk metallic glass The macroscopic tensile plasticity of bulk metallic glasses (BMGs) is highly desirable for various engineering applications. However, upon yielding, plastic deformation of BMGs is highly localized into narrow shear bands, which leads to "work softening" behavior and subsequently catastrophic fracture; this is the major obstacle for their structural applications. Here we report that macroscopic tensile plasticity in BMGs can be obtained by designing the surface pore distribution using laser surface texturing. The designed surface pore array creates a complex stress field compared to the uniaxial tensile stress field of conventional glassy specimens, and this stress field scalarization induces the unusual tensile plasticity. By systematically analyzing fracture behaviors and by finite element simulation, we show that stress field scalarization can resist main shear band propagation and promote the formation of larger plastic zones near the pores, which accommodate the homogeneous tensile plasticity. These results might give enlightenment for understanding the deformation mechanism and for further improving the mechanical performance of metallic glasses. Wang et al. found that surface mechanical attrition treatment could induce intense structural evolution and lead to the formation of gradient amorphous microstructures, which boosts multiple shear banding and thus yields superior tensile ductility 28. However, whether there exists a method to realize homogeneous tensile deformation in BMGs, rather than the inhomogeneous deformation governed by SBs, has seldom been investigated. On the other hand, at the nano-scale the deformation mechanism undergoes a transition from inhomogeneous to homogeneous deformation not relying on SBs, which results in tensile ductility and even necking 12,29. From this point of view, monolithic BMGs could be intrinsically malleable and ductile under tension. Meanwhile, when the energy state of BMGs is tuned into the higher energy state of the super-cooled liquid, large tensile plasticity can also be obtained 30. Similarly, for oxide glass, it has been found that nanowires show superplastic elongation larger than 200% under moderate exposure to an electron beam 31. These experimental results indicate that the carrier of plastic deformation in BMGs may not rely solely on SBs but on more microscopic deformation units. Many experimental results imply that BMGs are not completely homogeneous at the nanoscale, and there exist many dynamic or property defects termed flow units (also called liquid-like zones or nanoscale SBs) [32][33][34]. These dynamic defects show low modulus, low viscosity and high atomic mobility. When the fraction of these dynamic defects increases (such as by rejuvenation treatment), mechanical properties of BMGs such as plasticity can be largely improved 35,36. A question is then raised: could we improve the tensile plasticity of BMGs by making the deformation units directly accommodate the plastic strain rather than the SBs? It is challenging to realize the above idea considering SB formation along the main shear plane. However, recent research on densification and strain hardening under multiaxial loading 37 implies that tensile plasticity may be obtained by complicating the stress field in BMGs.
Meanwhile, stress, which is equivalent to temperature in this respect, has a similar effect on the viscosity, and yielding can be considered as a stress-induced glass transition 38. Thus, it is possible for the viscosity of the whole BMG to decrease and approach the liquid-like state under a certain applied stress mode, which leads to near-homogeneous deformation in BMGs. Artificial surface defects such as notches, indentation printing and laser shock peening have been verified to induce stress concentration, which could be used to create a complex stress field. However, these methods are not readily controllable and do not allow systematic variation of microstructural features, such as phase spacing and volume fraction. As a highly controllable and precise technique, laser surface texturing treatment (LSTT) has been adopted in welding and surface modification of BMGs as well as in cladding of engineering materials with amorphous coatings 39,40. Thus, LSTT could be an efficient tool to apply surface treatments and thereby create a complex stress field. In the present work, a series of designed LSTT pore arrays with different sizes is introduced into typical Zr-based BMG samples. The LSTT samples with different pore sizes display different tensile fracture behaviors, and appreciable tensile plasticity is obtained when the pore size is about 150-200 μm. Finite element analysis simulations for the different LSTT pore arrays were made to analyze the evolution of the stress distribution. A strategy of stress distribution scalarization is proposed to enhance the macroscopic tensile plasticity of BMGs. Results. Laser surface texturing treatment (LSTT). We designed three kinds of LSTT pore arrays. One of them is shown in the lower part of Fig. 1(b), with the as-cast sample for comparison in the upper part of Fig. 1(b). Clearly, both the as-cast and LSTT samples are amorphous, as confirmed by X-ray diffraction in Fig. 1(c). The amorphous nature of the LSTT sample can be maintained due to the ultrafast cooling rate of the pulsed laser during LSTT. Figure 1(d) displays the magnified part of the LSTT sample circled by the blue dashed rectangle in Fig. 1(b); the pore arrangement is an AB-like pattern [shown in the inset of Fig. 1(b)], which more readily promotes the formation of multiple SBs 23. From Fig. 1(e,f), the ratio of the depth to the size of the pores is about 280:150 (≈1.87), lying between 1 and 2, which meets our pore profile design. It is noted that the LSTT samples are different from the laser-ablation surface layers of previous research 41, and the depth of the laser-heating influenced layer is only several hundred nanometers for metals considering the ultrashort laser interaction time (10 fs) 42. This thin influenced layer does not produce a pronounced effect on the tensile mechanical behavior, in contrast to the molten layer of several to hundreds of micrometers produced by traditional laser ablation. What is more, we selectively designed the laser texturing pore pattern on the surface and the shape of the pores was specially designed to a near-cylindrical profile [Fig. 1(f)] to systematically analyze the stress field distribution near the pores by finite element simulation. Tensile plastic strain, elastic modulus and fracture strength. Figure 2(a) shows the typical tensile stress-strain curves of as-cast and LSTT specimens.
For the as-cast specimen, no visible macroscopic tensile plasticity appears and catastrophic fracture takes place when the tensile strain reaches about 2%; in sharp contrast, obvious tensile elongation appears in the LSTT sample marked with pattern C and a pore size of 150 μm. For the LSTT samples with pore sizes of 42 and 85 μm (patterns A and B), visible nonlinear tensile stress-strain behavior also appears. The enlarged tensile stress-strain curves corresponding to the parts circled by the green, magenta and blue dashed rectangles are also shown in Fig. 2(b). One can clearly see that the nonlinear plastic deformation starts when the tensile strain is ~0.0195 and the tensile plastic strain ε_p is only about 0.11% for LSTT sample A. For LSTT sample B, the starting tensile strain of plastic deformation decreases to 0.0164 and ε_p increases to 0.19%. For LSTT sample C, the starting tensile strain of plastic deformation decreases to 0.0128 and ε_p increases to 0.51%. The above results indicate that ε_T depends strongly on the pore geometry. In addition, no serrated flow appears in the plastic part of the stress-strain curve of LSTT sample C in Fig. 2(b); serrated flow is the direct signature of SB-governed plastic deformation in BMGs 26,27. The nonlinear plastic part of the stress-strain curves is very analogous to the tensile deformation of microscale or nanoscale BMGs 12, which indicates that a homogeneous plastic deformation process may take place within the LSTT BMGs. With the increase of the LSTT pore size, the tensile plastic strain ε_T increases while the elastic modulus E and the fracture strength σ_f conversely decrease, as seen from Fig. 2(b). The values of ε_T, E and σ_f for the various LSTT pore sizes are included in Table 1 and shown in Fig. 2(c). The evolution of ε_T versus that of E and σ_f displays an inverse trend with increasing pore size, which is consistent with previous research 23. The surface pore array can actually be considered as a second, soft phase, and the increase in the proportion of surface pores leads to the decrease of E. Although σ_f decreases by about 30% compared to the as-cast sample, ε_T increases to 0.51% from almost zero for the as-cast sample. The above results indicate that, to some extent, we can tune the tensile plastic deformation ability by designing the LSTT pore stacking. Fracture angle and fracture morphology. The LSTT treatment also induces a marked change in fracture angle and morphology, as shown in Fig. 3. The as-cast sample fails by a single main shear fracture, with a shear fracture angle of ~50.9°, which is consistent with previous research [43][44][45]. The fracture surface morphology is the typical tensile fracture morphology of firework-like patterns consisting of a core and a radial vein-like pattern in the first pictures of Fig. 3(b,c). This indicates that the normal tensile stress controls the fracture process. In contrast, the LSTT samples exhibit larger fracture angles than that of the as-cast samples: the fracture angles of patterns A, B and C are 51.5°, 55.5° and 62.9°, respectively [see Table 1], which implies that the LSTT pore array twists the propagation direction of the main SBs. Analogous to the as-cast sample, LSTT sample A with the smaller pore size displays a similar radial-like pattern of smaller size, which indicates that the influence of the pore arrays starts to work [second pictures of Fig. 3(b,c)].
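As an aside on the tensile quantities quoted above (E, σ_f, ε_p), the following Python sketch shows one simple way to extract them from raw stress-strain data: fit E on the initial linear segment, then read the plastic strain as the deviation from Hooke's law. The curve below is synthetic and the routine is our illustration, not the authors' analysis script.

```python
import numpy as np

def tensile_metrics(strain, stress, lin_max=0.01, tol=5e-4):
    """Estimate elastic modulus, onset of nonlinearity and plastic strain
    at fracture from a tensile curve (a minimal sketch)."""
    lin = strain <= lin_max
    E = np.polyfit(strain[lin], stress[lin], 1)[0]   # elastic modulus (slope)
    resid = strain - stress / E                      # accumulated plastic part
    onset = strain[np.argmax(resid > tol)]           # start of nonlinearity
    eps_p = resid[-1]                                # plastic strain at failure
    return E, onset, eps_p

# Synthetic curve loosely mimicking LSTT sample C: linear elastic response
# up to ~1.28% strain, then a shallower nonlinear segment before fracture.
eps = np.linspace(0.0, 0.018, 400)
sig = np.where(eps < 0.0128, 60e3 * eps,
               60e3 * 0.0128 + 20e3 * (eps - 0.0128))   # stress in MPa
E, onset, eps_p = tensile_metrics(eps, sig)
print(f"E = {E/1e3:.1f} GPa, onset = {onset:.4f}, eps_p = {eps_p*100:.2f}%")
```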
For LSTT sample B, the fracture surface displays a vein-like pattern and a river-like pattern [third pictures of Fig. 3(b,c)], which is the typical fracture pattern of the compressive deformation process, where compression and shear stress play the dominant role during fracture. These results suggest that the fracture mode undergoes a transition from uniaxial tensile fracture to compression-like fracture with the change of the pore array and size. For LSTT sample C, dense micro-scale cone-shaped structures with a size of 7.5 μm appear in the central part between the two opposite surface pores [marked by the green dashed circle in the fourth picture of Fig. 3(b)]; such structures otherwise only exist in microscopic BMG samples as a size effect, such as micro-scale foils 46 and nanoscale samples 29. These unique cone-shaped structures remind us of the homogeneous tensile fracture morphology in the supercooled liquid state of BMGs 47, and the central part between the two opposite surface pores seems to behave like the liquid state. Previous research 23,24,37 has shown that constraints induce stress concentration to activate the formation of multiple SBs. The SB-dominated fracture mode usually expresses a vein pattern on the main fracture surface 48. These unexpected cone-shaped structures indicate that a fracture mode transition occurs from the usual heterogeneous plastic deformation mode via shear banding to homogeneous deformation in BMGs. The evolution of the fracture angle, fracture morphology and fracture mode with the LSTT pore size is displayed in Fig. 4, based on the data of Table 1. Stress field distribution of LSTT samples with different pore sizes D. Finite element simulations are adopted to provide explanations for the reduction in fracture strength and the appearance of homogeneous tensile plastic deformation. The numerical results for three LSTT samples (three different pore sizes of 50, 100 and 150 μm) at an elastic strain of 2% are displayed in Fig. 5(a-c), in which an elastic modulus of 78.41 GPa and a Poisson's ratio of 0.377 were used for the Zr-based BMG 8. Figure 5(a) shows the stress distribution field for the LSTT sample with D = 50 μm. One can see that most of the external stress is undertaken by the BMG matrix and stress concentration appears in the regions near the LSTT pores, from both the plan and cross-sectional views. The influence of the LSTT pores is localized only in the regions near the pores and the stress field is analogous to that of the as-cast sample. Thus, the fracture features such as the fracture strength, the fracture angle and the fracture morphology do not change much compared to the as-cast ones. When D increases to 100 μm, the stress field distribution is markedly different in Fig. 5(b). The stress-concentrated regions near the pores become bigger and start to join hand-in-hand into a grid-like stress concentration zone in the plan view. The average stress value near the pores is comparable to the stress value of the BMG matrix and the grid-like stress concentration zone starts to carry more of the external stress, which indicates that the influence of the LSTT pores already competes with that of the BMG matrix. In the cross-sectional view, the central parts between the opposite pores undertake larger stress than the BMG matrix and the central parts between adjacent pores undertake smaller stress, which produces a compression-like stress field.
Thus, this comprehensive stress field disturbs the usual deformation process along the main shear plane and twists the fracture angle away from the normal value (~50°). However, this comprehensive stress field does not change the heterogeneous deformation mode via the main SBs in Fig. 5(b), and the main fracture morphology is the vein-like pattern governed by the tensile shear mode. When D further increases to 150 μm, the regions both near the pores and between the opposite pores reach yielding before the BMG matrix, and the grid-like stress-concentrated regions grow larger. These stress-concentrated zones superimpose and form the yielding zone, in which the BMG enters the liquid-like state and expresses homogeneous flow behavior 49. From Fig. 5(c), one can see that the influenced zones of the LSTT pores have exceeded the BMG matrix and a transition of the deformation and fracture mode occurs, from tensile shear fracture to the homogeneous plastic deformation fracture mode. This homogeneous plastic deformation on the mesoscopic scale gives rise to the formation of the microscopic cone-shaped structures on the fracture surface of LSTT sample C in Fig. 3. Stress field evolution of the LSTT sample with D = 150 μm at different tensile strains. We also studied the stress field evolution of the sample with D = 150 μm under different tensile strains (0, 2%, 4% and 6%) to understand the evolution of the stress field during tensile deformation in Fig. 6. One can clearly see that stress concentration starts to take place in the regions near the pores while the BMG matrix barely sustains the loading. With the increase of the strain to 2%, the stress-concentrated regions connect to each other and form a complex grid-like stress field. The influence of the grid-like stress field plays a dominant role in the subsequent tensile deformation. When the tensile strain reaches 6%, the influenced zones of the grid-like stress field expand to the whole region between the opposite pores, in both the plan view and the cross-sectional view. In particular, from the cross-sectional view, the central regions between the opposite pores have entered the yielding state ahead of the BMG matrix. These regions break the main shear plane of the brittle fracture mode without tensile plasticity and lead to the macroscopic tensile plastic deformation in LSTT BMG samples. Discussion. The above experimental results and finite element analysis demonstrate that identical Zr-based BMG specimens with different LSTT pore arrays display quite different tensile fracture behaviors. Under uniaxial tension, the applied tensile stress is uniform and it is easier to form a single main SB along the main shear plane, leading to the rapid propagation of the SB and the subsequent brittle failure. For the LSTT samples in this work, the complex stress field (compressive shear stress and tensile shear stress) induced by the LSTT pore array plays a role similar to that of second, soft crystalline phases 21,22 in activating the production of stress-concentrated zones. This complex stress field leads to a complex plastic deformation mechanism in LSTT samples, i.e. mesoscopic homogeneous plastic deformation near the LSTT pores together with heterogeneous shear-banding-governed deformation. Thus, the whole stress field is disrupted by the complex local stress field induced by the pore array, and this effect is equivalent to the transition of a single vectored stress into a multiaxial vectored field, i.e. stress field scalarization.
From this view, stress field scalarization makes the uniaxial tensile stress field transform into a multi-axial complex stress field, which prevents the fast propagation of the main SB and promotes the production of the mesoscopic yielding zone, thereby enhancing the tensile ductility of BMGs. Previous works [32][33][34][50][51][52][53] demonstrated that BMGs are heterogeneous at the nano-scale, consisting of flow units and an elastic matrix. Upon external loading, the flow units behave like inelastic inclusions and give birth to local plastic events, also known as shear transformation zones, which closely correlate with various mechanical behaviors. In the flow-unit picture, an SB can be considered as the assembled consequence of many flow units along the main shear plane. Thus, to clearly understand the physical deformation mechanism of the LSTT BMG samples, a phenomenological picture of stress field scalarization based on the flow-unit image and the finite element analysis is displayed in Fig. 7. Under uniaxial tensile stress, the stress field displays a near-parallel distribution along the external loading direction for the as-cast sample [left part of Fig. 7(a)]. For this kind of stress field distribution, the total effect of the internal stress field is equivalent to the tensor stress, and it is the tensor stress that directly leads to the formation of a single main SB along the main shear direction, which is prone to induce catastrophic fracture. In contrast, for LSTT samples, the stress field is twisted in the regions near the LSTT pores and the tensor stress with parallel distribution is scalarized [left part of Fig. 7(b)]. The scalarized stress field directly arouses stress concentration in the regions near the LSTT pores, which disrupts the flow-unit arrangement along the main shear plane. Thus, not only are the flow units near the main shear plane activated, but the hidden flow units away from the main shear plane can also be excited. Those activated flow units aggregate into the mesoscopic yielding zone near the LSTT pores when D reaches a certain value (150 μm in this work). Previous research suggests that the stabilization of SB propagation requires that the typical length D of the artificial heterogeneous microstructures satisfy D < R_P 25,27,28,43. R_P is the intrinsic crack-tip plastic zone radius, R_P ~ (1/2π)(K_IC/σ_y)², where K_IC is the fracture toughness and σ_y is the yield strength. For Zr-based BMGs, the value of R_P is about 150 μm. In our case, D is the size of the LSTT pores. As shown in Figs 5 and 6, when D < R_P (pore size about 50 μm), the tensile plasticity only increases to ~0.1% and the nominal tensile stress still dominates the fracture process. When D is about 100 μm, comparable to R_P, the deformation mode becomes different and the shear stress starts to play the dominant role. When D reaches about 150 μm, the homogeneous plastic deformation near the LSTT pores starts to become obvious, which induces the significant improvement of the tensile plasticity. This suggests that D/R_P is actually the prominent factor controlling the stress distribution and, thereby, the fracture strength and the tensile plastic strain in BMGs. We note that the depth of the LSTT pores is an important controlling parameter. The core idea of improving the ductility of BMGs by the LSTT technique is to tune the stress field distribution so as to activate more flow units to undertake the external loading.
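The estimate R_P ~ (1/2π)(K_IC/σ_y)² is straightforward to evaluate; the sketch below uses illustrative property values for a Zr-based BMG (K_IC ≈ 52 MPa·m^(1/2) and σ_y ≈ 1.7 GPa, both our assumptions rather than values given in the text), which reproduces the quoted R_P ≈ 150 μm scale.

```python
import math

# Assumed material properties for a Zr-based BMG (illustrative only).
K_IC    = 52e6    # fracture toughness, Pa*m^(1/2)
sigma_y = 1.7e9   # yield strength, Pa

# Crack-tip plastic zone radius from the scaling relation in the text.
R_P = (1.0 / (2.0 * math.pi)) * (K_IC / sigma_y) ** 2
print(f"R_P ~ {R_P * 1e6:.0f} um")   # ~149 um, the scale of the 150 um pores
```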
Thus, this work is actually one of a series of methods for stress-field-controlling engineering in the improvement of mechanical properties. An LSTT pore array with the same pore size and arrangement may produce a different stress field distribution when the depth of the pores varies, and then lead to distinct mechanical behavior. Furthermore, the relative depth of the LSTT pores compared to the thickness of the BMG sample may be a key factor when the pore size is comparable to the sample thickness. Therefore, various LSTT patterns could be applied to obtain the corresponding stress field distribution for a specific BMG sample with the desired mechanical properties. It is worth mentioning that our strategy is significantly different from the previous methods of enhancing the tensile plasticity by promoting multiple SBs 25,27,28,43. In our case, the carrier of the tensile plasticity is the mesoscopic plastic zone near the LSTT pores consisting of flow units, rather than multiple SBs. Before the main SB propagates, the regions near the LSTT pores have transformed from the solid-like state to the liquid-like state under the compression-shear complex stress field. Although there is only 0.51% tensile plastic strain, larger macroscopic tensile plasticity might be obtained by further optimizing the profile and spatial distribution of the LSTT pore array, which is our future work. Actually, the methods of introducing a second crystalline phase, artificial surface defects and notches into BMGs 26,28,29,48 to enhance the tensile plasticity can also be regarded as other forms of stress field scalarization. Conclusions. A stress field scalarization strategy is proposed to improve the macroscopic tensile plasticity of BMGs, and the method is proved to be feasible experimentally by designing laser surface texturing treatments on the surface. The introduced surface pore array can activate the formation of microscopic plastic zones in the regions near the LSTT pores, which then connect into a mesoscopic zone when the pore size meets certain conditions. As a result, the mesoscopic zone undertakes the external stress and gives rise to the macroscopic tensile plasticity. Under complex stress field environments, BMGs display totally different mechanical behaviors compared to the uniaxial stress field, which provides in-depth understanding of the physical mechanism under different external environmental conditions. Due to the superior forming ability of BMGs within the supercooled liquid region, the present strategy can also be readily realized by introducing various artificial defects on the surface using the superplasticity of the BMG in its supercooled liquid state. Methods. Metallic glasses and specimen preparation. Zr-based BMG samples with a nominal chemical composition of Zr64.13Cu15.75Ni10.12Al10 were prepared by induction melting a mixture of pure metal elements and then casting into a Cu mold to form plate-shaped specimens with dimensions of 1 × 10 × 50 mm³. The glassy nature of the BMG samples was confirmed by X-ray diffraction (XRD) using a BRUKER D8 ADVANCE diffractometer with a Cu Kα radiation source and by differential scanning calorimetry (DSC) performed under a purified argon atmosphere in a Perkin-Elmer DSC-7. The as-cast BMG plates were polished using 200, 600 and 1200 grit SiC paper successively to remove the thin crystalline surface layer caused by interaction with the mold.
The final thickness of the polished plates was reduced to about 0.7 mm, with the upper and lower surfaces being parallel. Dog-bone-like specimens for tensile tests with cross-section dimensions of 0.7 × 7.0 mm² and a total length of 42 mm were cut from the BMG plates using an electric-spark wire-cutting machine; the gauge dimensions were 0.7 × 3 × 22 mm³. All tensile specimens were polished with 1.5 μm diamond sandpaper to remove corrosion pits induced by the electric-spark wire cutting. Laser surface texturing treatment. Before the tensile tests, the polished dog-bone-like specimens were pre-treated by the laser surface texturing treatment technology, LSTT, in the central gauge part; the LSTT set-up sketch is shown in Fig. 1(a). A picosecond laser TruMicro 5025 was used. The laser produces a beam with a Gaussian energy distribution and operates at 515 nm with a maximum pulse energy of 150 μJ, a pulse duration of 0.01 ns and a frequency of 800 kHz. A scanner head combined with the laser allows high precision to be reached during texturing. The BMG specimen was fixed on a movable platform (including a cooling water system with a temperature range between 5 °C and 23 °C). The surface texture can take various forms such as streaks, holes and other geometries. In this work, texturing was done in the form of circular pores. After LSTT, the surface micro-pores were observed by scanning electron microscopy (SEM) conducted in a Philips XL30 instrument and by white-light interference profilometry (BRUKER Contour GT). Various laser-induced pore array patterns with different diameters and depths were designed on the tensile specimens. In practical industrial applications, the improvement of mechanical and physical properties by LSTT is largely influenced by the profile (shape, size, density and depth) of the pores induced by LSTT 54. To study the LSTT effect on the mechanical properties individually, we controlled the ratio of the depth to the size of the pores between 2:1 and 3:1 by optimizing the laser parameters and kept the spatial arrangement of the pores identical across the different sizes. Tensile mechanical tests. Uniaxial tensile tests were conducted on the as-cast and LSTT BMG specimens at a constant quasi-static strain rate of about 1 × 10⁻⁴ s⁻¹ on an INSTRON ElectroPuls E10000 All-Electric Test Instrument at room temperature. Strain was precisely and directly measured over the sample gauge length using a non-contacting video extensometer (INSTRON). At least three specimens were measured to ensure that the results were reproducible. The fracture features, such as the newly generated tensile fracture surfaces, the fracture side-surface morphology and the fracture angle, were observed by SEM. Finite element simulation. A series of finite element simulations was carried out to probe the mechanical mechanism giving rise to the dramatic tensile ductility enhancement. The dimensions of the model system and pores were designed to be identical to the experimental values to conveniently compare the simulation and the experimental results. The number of pores was reduced in the tensile direction to save computing time without changing the final simulation results. Specifically, we varied the size of the pores to investigate the effect of the heterogeneity induced by the LSTT pores on the tensile mechanical behavior.
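As a sanity check on the quoted laser parameters, the pulse energy and repetition rate together fix the maximum average power; the focal spot diameter below is our assumption (it is not stated in the text), so the fluence figure is purely illustrative.

```python
import math

# Values quoted in the text.
pulse_energy = 150e-6    # J (maximum pulse energy)
rep_rate     = 800e3     # Hz (repetition frequency)

avg_power = pulse_energy * rep_rate
print(f"max average power: {avg_power:.0f} W")     # 120 W

# Assumed focal spot (hypothetical, for illustration only).
spot_diameter = 20e-6    # m
fluence = pulse_energy / (math.pi * (spot_diameter / 2) ** 2)
print(f"peak fluence: {fluence / 1e4:.1f} J/cm^2") # ~48 J/cm^2 at this spot size
```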
Tensile deformation was introduced by applying an X displacement on the right boundary while the left boundary was forbidden to move in the X direction, as shown in Fig. 5. To gain insight into the evolution of the stress field, the displacement was imposed in increasing steps of 50, 100 and 150 μm, corresponding to nominal strains of 2%, 4% and 6%, respectively. In the model, the material was treated as an isotropic elastic solid; the Young's modulus and Poisson's ratio of the BMG were taken to be 78.4 GPa and 0.377, respectively. Previous studies 55,56 have shown that the von Mises criterion is adequate for describing the yield response of amorphous alloys. Therefore, for ease of comparison of the results among the different types of samples, the von Mises criterion was chosen for the present simulations. The basis set of the finite element simulations was chosen to be a four-node linear element. The finite element program Abaqus (version 6.10, Dassault Systèmes Simulia Corp., Providence, RI, USA) was employed for the calculations in this work.
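The von Mises criterion used in the simulations reduces the full stress tensor to a single scalar; a minimal sketch of that reduction, evaluated for a uniaxial state at 2% elastic strain with the paper's E = 78.4 GPa, is given below.

```python
import numpy as np

def von_mises(s):
    """Von Mises equivalent stress of a 3x3 Cauchy stress tensor:
    sqrt(3/2 * dev(s):dev(s))."""
    dev = s - np.trace(s) / 3.0 * np.eye(3)   # deviatoric part
    return np.sqrt(1.5 * np.sum(dev * dev))

# Illustrative uniaxial stress state at 2% elastic strain.
E = 78.4e9                                    # Pa, from the text
sigma = np.zeros((3, 3))
sigma[0, 0] = E * 0.02                        # ~1.57 GPa along the X axis
print(von_mises(sigma) / 1e9, "GPa")          # equals sigma_xx for uniaxial load
```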
Use of β‑blockers and risk of age‑related macular degeneration among hypertensive patients: An insight from The National Health and Nutrition Examination Survey Although age-related macular degeneration (AMD) is the leading cause of legal blindness, the treatment methods for AMD are limited. The aim of the present study was to examine the association between oral β-blockers (BBs) and the risk of developing AMD among hypertensive patients. For this purpose, a total of 3,311 hypertensive patients from the National Health and Nutrition Examination Survey were included in the study. The use of BBs and treatment duration data were collected using a self-reported questionnaire. AMD was diagnosed from gradable retinal images. Multivariate-adjusted survey-weighted univariate logistic regression was used to confirm the association between the use of BBs and the risk of developing AMD. The results revealed that the use of BBs exerted a beneficial effect [odds ratio (OR), 0.34; 95% confidence interval (CI), 0.13-0.92; P=0.04] against late-stage AMD in the multivariate adjusted model. When the BBs were classified into non-selective BBs and selective BBs, the protective effect in late-stage AMD was still observed for the non-selective BBs (OR, 0.20; 95% CI, 0.07-0.61; P<0.001). After accounting for treatment duration, long-term treatment with BBs (>6 years) was also found to reduce the risk of late-stage AMD (OR, 0.13; 95% CI, 0.03-0.63; P=0.01). In late-stage AMD, the long-term use of BBs was beneficial for geographic atrophy (OR, 0.07; 95% CI, 0.02-0.28; P<0.001). On the whole, the present study demonstrates that the use of non-selective BBs exerted a beneficial effect against the risk of late-stage AMD among hypertensive patients. Long-term treatment with BBs was also associated with a lower risk of developing AMD. These findings may provide novel strategies for the management and treatment of AMD. Introduction. Age-related macular degeneration (AMD) is an eye disease whose incidence rate increases with age and which leads to decreased central vision (1). AMD is the most common cause of legal blindness among the population aged >50 years in the western world (2). In the USA, the number of cases of legal blindness caused by AMD is greater than the number caused by glaucoma, cataract and diabetic retinopathy combined (3). AMD may have a severe impact on the quality of life of affected individuals. It is associated with an increased risk of functional disabilities, negative effects on daily activities, an increased risk of depression and a higher risk of developing cognitive impairments in older adults (4)(5)(6). AMD has a variety of classification criteria. It is traditionally classified into early and late stages based on color fundus images (7). Early-stage AMD is characterized by large drusen, retinal pseudo-drusen and pigmentary abnormalities. By contrast, late-stage AMD is divided into neovascular AMD (nAMD) and geographic atrophy (GA). Although AMD is the leading cause of legal blindness, the treatment methods available for late-stage AMD, particularly nAMD, are limited. The primary treatment for nAMD is based on the inhibition of vascular endothelial growth factor (VEGF) (8). Although several complement inhibitors are undergoing therapeutic clinical trials (ClinicalTrials.gov identifiers: NCT05230537, NCT03364153 and NCT04465955), there are currently no effective therapeutic methods available for GA (9).
In addition, there is also a lack of effective treatment strategies for preventing and delaying the progression of early- to late-stage AMD (7). Hence, the development of novel therapeutic agents for AMD is mandatory. β-adrenergic receptor (β-AR) blockers (BBs) are medications widely used in the treatment of heart diseases, such as hypertension, arrhythmias and heart failure. Previous preclinical studies have demonstrated a protective role of BBs against neovascularization. For example, propranolol treatment has been shown to reduce neovascularization by 50% in laser-induced choroidal neovascularization by reducing the release of VEGF (10). Reduced corneal neovascularization with downregulated levels of VEGF and cytokines was also observed following treatment with timolol in a murine corneal suture model (11). Hence, several clinical research studies have explored the possible association between BBs and AMD (12)(13)(14)(15)(16)(17)(18)(19). However, some conflicting results have been reported. The studies by Klein et al (17) and Yeung et al (19) reported an increased risk of developing nAMD in patients treated with BBs compared to those not treated, while other studies, such as those by Traband et al (12), Kolomeyer et al (13), Thomas et al (15), Song et al (16) and Davis et al (18), found no association between the use of BBs and nAMD development. Montero et al (14) suggested a beneficial effect of BBs against nAMD. Moreover, the majority of studies have focused on nAMD, and not on GA and early AMD. According to their selectivity for β-AR, the BBs used in clinical practice are divided into two main categories: selective and non-selective BBs (20). The majority of research focuses on non-selective BBs. The association between the development of AMD and the use of selective BBs has not been reported to date, at least to the best of our knowledge. Thus, the present study investigated the association between different types of BBs and the risk of developing different stages of AMD using data from the National Health and Nutrition Examination Survey (NHANES). Patients and methods. Data source and ethics approval. All data used in the present study were obtained from the NHANES, which is a cross-sectional survey administered by the Centers for Disease Control and Prevention's National Center for Health Statistics (NCHS) since 1999. It reflects the national status of health and nutrition in the USA. Since the present study employed de-identified information from the NHANES database approved by the institutional review board of the NCHS, the Ethics Committee of the Second Affiliated Hospital of Wenzhou Medical University (Wenzhou, China) granted the study an exemption from ethical review. Participants enrolled. As the retinal examination was only available in two NHANES cycles (2005-2006 and 2007-2008), participants were selected from these two cycles. Since BBs are mainly used in patients with hypertension, all the hypertensive participants selected were >40 years of age. Participants who responded 'Yes' to the question 'Have you ever been told by a doctor or other health professional that you had hypertension, also called high blood pressure?' and those with an average systolic blood pressure ≥140 mmHg or average diastolic blood pressure ≥90 mmHg in the examination were defined as being hypertensive.
In total, 20,497 participants were included from the two NHANES cycles; 13,416 participants were excluded as they were aged <40 years, and among the remainder, 4,032 had hypertension. A total of 3,023 participants, for whom a complete retinal examination had been performed, were finally enrolled in the present study. A flowchart of the inclusion process is presented in Fig. 1. Classification of AMD. In the NHANES database, AMD was classified into three stages as follows: no AMD, early-stage AMD and late-stage AMD. The classification criteria were the following: any large (≥125 µm) drusen, retinal pseudodrusen or pigmentary abnormalities in the retinal examination were defined as early-stage AMD; any GA or exudative neovascularization in the retinal examination was defined as late-stage AMD. Those without any signs of early- or late-stage AMD in the retinal examination were considered as having no AMD. If both eyes were affected by AMD, data from the eye with the more severe stage of AMD were used. Use of BBs and treatment duration. The use of BBs was identified according to the self-reported prescription medications questionnaire (https://wwwn.cdc.gov/nchs/nhanes/continuousnhanes/questionnaires.aspx?BeginYear=2005). Non-selective BBs included propranolol, carvedilol, nadolol, sotalol, pindolol, labetalol, penbutolol and timolol. Selective BBs included nebivolol, metoprolol, atenolol, bisoprolol, acebutolol and betaxolol. The duration of the use of BBs was also obtained from the questionnaire and was divided into four quartiles as follows: first quartile, ≤2 years; second quartile, 2-4 years; third quartile, 4-6 years; fourth quartile, >6 years. The long-term use of BBs was defined as a BB treatment duration of >6 years. Other variables examined. Other variables included demographic characteristics, a history of comorbidities, health-related behaviors and the use of anti-hypertensive drugs. Data regarding age, sex, race, economic status and education level were collected under the category of demographic characteristics. A history of stroke, heart disease, cancer or malignancy, diabetes mellitus, thyroid issues, glaucoma and diabetic retinopathy was collected within the history of comorbidities category. Health-related behaviors provided information about smoking and alcohol consumption. The use of anti-hypertensive drugs included the use of BBs and treatment duration, renin-angiotensin system inhibitors (RASIs), calcium channel blockers (CCBs) and diuretics. Participants with coronary heart disease, heart attack, angina pectoris or congestive heart failure were defined as having a history of heart disease. Participants with anemia or chronic bronchitis were defined as having a history of lung disease. Participants with glaucoma were identified by a cup-to-disc ratio >0.6 in at least one eye. Participants with diabetes mellitus were identified using the following criteria: i) fasting plasma glucose levels ≥126 mg/dl; ii) 2-h plasma glucose levels ≥200 mg/dl; iii) HbA1c ≥6.5%; or iv) answering 'Yes' to the question 'Have you ever been told by a doctor or health professional that you have diabetes or sugar diabetes?'. Participants with diabetic retinopathy were identified by any signs of retinopathy (level >14) on fundus images together with a diagnosis of diabetes mellitus. Data regarding body mass index (BMI) and waist circumference were measured during a physical examination at the time of the survey.
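To illustrate how the inclusion and comorbidity rules above translate into code, the sketch below encodes the hypertension and diabetes criteria; the key names are hypothetical placeholders, not actual NHANES variable names.

```python
def flag_hypertension(row):
    """Inclusion rule from the text: self-reported diagnosis, or mean
    SBP >= 140 mmHg, or mean DBP >= 90 mmHg (keys are hypothetical)."""
    return (row["told_hypertension"] == "Yes"
            or row["mean_sbp"] >= 140
            or row["mean_dbp"] >= 90)

def flag_diabetes(row):
    # Criteria from the text: FPG >= 126 mg/dl, 2-h PG >= 200 mg/dl,
    # HbA1c >= 6.5%, or self-reported diagnosis.
    return (row["fpg"] >= 126 or row["pg_2h"] >= 200
            or row["hba1c"] >= 6.5 or row["told_diabetes"] == "Yes")

# Example: elevated systolic pressure alone satisfies the rule.
print(flag_hypertension({"told_hypertension": "No",
                         "mean_sbp": 147, "mean_dbp": 84}))   # True
```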
Data on triglyceride (TG), red blood cell (RBC), white blood cell (WBC), high-density lipoprotein (HDL) and platelet (PLT) levels of each participant were obtained through laboratory examination. Statistical analyses. Data were analyzed using the survey package in R software (version 4.1.3; http://r-survey.r-forge.r-project.org/survey/) with sampling weights following the complex sample design of NHANES. Continuous variables are presented as weight-adjusted mean ± standard error, and qualitative variables as weight-adjusted proportion ± standard error. ANOVA was used for comparisons of means among multiple groups, followed by Tukey's post hoc test. Survey-weighted univariate logistic regression was used to examine the association between different types of BBs and the various stages of AMD. A generalized additive model and natural cubic splines were used to explore the non-linear association between BB treatment duration and the risk of developing AMD. A multivariate model adjusted for age, sex, race, stroke history, heart disease history, thyroid disease history, glaucoma, RBCs and HDL was applied. The results are presented as odds ratios (ORs) with 95% confidence intervals (95% CIs). The correlation between the use of BBs and the prevalence of AMD was calculated using Spearman's correlation analysis, as was the correlation between the use of non-selective BBs and the prevalence of AMD. A value of P<0.05 was considered to indicate a statistically significant difference. Results. Characteristics of the participants enrolled. In total, 3,311 participants were enrolled in the present study. The participants with AMD tended to be older, of Caucasian or African-American origin, married, with higher BMI and HDL levels, lower RBC levels, and a history of heart disease, stroke and thyroid disease (Table I). In addition, a significant difference was found in the use of RASIs and BBs between participants with AMD and those with no AMD (Table I). Use of BBs and the risk of AMD in the hypertensive population. The association between the use of BBs and the risk of developing AMD was explored among all the participants. A significant correlation was found between the use of BBs and AMD (Rho=0.06, P<0.05) (Fig. 2). BB treatment increased the risk of developing AMD in the hypertensive population (OR, 1.49; 95% CI, 1.21-1.84; P<0.001) (Table II). When the BBs were categorized into non-selective and selective BBs, a significant association was found between the selective BBs and AMD (OR, 1.59; 95% CI, 1.29-1.97; P<0.001) (Table II). By contrast, no correlation was found between the use of non-selective BBs and AMD (Rho=0.07, P>0.05) (Fig. 3). However, no association was found between the use of BBs and AMD after adjusting for age, race, stroke history, heart disease history, thyroid disease history, glaucoma, RBCs and HDL (Table II). Use of BBs and the risk of early- and late-stage AMD in the hypertensive population. As the use of BBs did not have a significant effect on the risk of developing AMD following multivariate adjustment, the present study further explored whether the use of BBs was related to the different stages of AMD. Of note, there was no significant association between the use of BBs and the risk of early-stage AMD (Table III). Furthermore, no association was found after classifying the BBs into non-selective and selective BBs in the adjusted model (Table III).
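The survey-weighted logistic models behind these ORs can be sketched as follows. This is a simplified Python analogue of the R survey-package workflow: it uses frequency weights only and ignores the strata/PSU design adjustment of true survey estimators, and the data are simulated placeholders rather than NHANES records.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 3311
bb_use  = rng.integers(0, 2, n)            # hypothetical BB-use indicator
age     = rng.normal(62, 10, n)            # hypothetical covariate
weights = rng.uniform(0.5, 3.0, n)         # hypothetical sampling weights

# Simulate an outcome with a protective BB effect for illustration.
logit_p = -2.0 - 1.0 * bb_use + 0.02 * (age - 62)
amd = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([bb_use, age]))
fit = sm.GLM(amd, X, family=sm.families.Binomial(),
             freq_weights=weights).fit()

or_bb = np.exp(fit.params[1])              # OR for BB use
ci_bb = np.exp(fit.conf_int()[1])          # its 95% CI
print(f"OR = {or_bb:.2f}, 95% CI = ({ci_bb[0]:.2f}, {ci_bb[1]:.2f})")
```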
However, the BBs exerted a beneficial effect (OR, 0.34; 95% CI, 0.13-0.92; P=0.04) against late-stage AMD in the multivariate adjusted model (Table III). The protective effect for late-stage AMD was observed for the non-selective BBs (OR, 0.20; 95% CI, 0.07-0.61; P<0.001) (Table III). By contrast, the selective BBs were not found to be significantly associated with late-stage AMD (OR, 0.46; 95% CI, 0.18-1.15; P=0.09) (Table III). BB treatment duration and risk of AMD. The aforementioned results indicated the protective effect of BBs against late-stage AMD. However, the potential cumulative effects of time have not previously been considered in the literature, at least to the best of our knowledge. Hence, in the present study, the association between BB treatment duration and the risk of developing AMD was further investigated. It was found that BB treatment duration had no association with the risk of developing AMD (OR, 0.98; 95% CI, 0.94-1.02; P=0.231; R²=0.114) (Table IV). In addition, no linear association was found between BB treatment duration and AMD using linear regression analysis (OR, 0.998; 95% CI, 0.996-1.001; P=0.275; R²=0.061) (Table IV). A generalized additive model and natural cubic splines were introduced to examine the non-linear association. In the smoothing-spline curve, the predisposition to AMD exhibited a trend to first increase, and then decrease, with increasing BB treatment duration (Fig. 4). Thus, we divided the BB treatment duration into four quartiles as follows: ≤2 years, 2-4 years, 4-6 years and >6 years. Compared to the patients not on BB treatment, a decreased risk of developing AMD was only found in the last quartile of BB treatment duration (OR, 0.65; 95% CI, 0.43-0.98; P=0.04; R²=0.493) (Table V). The other groups did not exhibit a significant difference compared with the patients not on BB treatment (Table V). Similar results were found when examining the association between BB treatment duration and late-stage AMD. Only the fourth quartile of BB treatment duration exhibited a significant association with the risk of late-stage AMD compared with the BB non-users (OR, 0.13; 95% CI, 0.03-0.63; P=0.01) (Table V). Furthermore, with increasing duration of BB treatment, a significant decrease in the magnitude of the associations with the risk of late-stage AMD was observed (P for trend=0.048) (Table V). The other quartiles did not exhibit a significant association compared with the BB non-users (Table V). By contrast, for early-stage AMD, no significant difference was found for BB treatment duration compared with the BB non-users (Table V). Long-term use of BBs and different subtypes of AMD. The aforementioned results suggested that a BB treatment duration >6 years may decrease the risk of developing AMD. Therefore, the long-term use of BBs was defined as a BB treatment duration >6 years in the present study. Since the previous assessments only focused on nAMD without considering GA and early-stage AMD, the long-term use of BBs was investigated in order to assess its influence on the different subtypes of AMD. Two major subtypes of early-stage AMD were mainly considered in the present study, namely pigmentary abnormalities and soft drusen. However, there was no association between the long-term use of BBs and the two subtypes of early-stage AMD (Table VI). In late-stage AMD, the long-term use of BBs was a protective factor for GA (OR, 0.07; 95% CI, 0.02-0.28; P<0.001) (Table VI).
However, it was considered that the result for the long-term use of BBs and GA was not reliable, as the number of GA cases was very small. There was also no significant association between the long-term use of BBs and nAMD (Table VI). Discussion. In the present study, although there was insufficient evidence for the exact association between the use of BBs and AMD, a decreased association was found between the use of BBs and late-stage AMD among hypertensive participants from NHANES. The use of BBs, particularly long-term BB treatment, was found to exert a protective effect against late-stage AMD. Even though a significant protective effect of the long-term use of BBs against GA was found, due to the limited number of cases of GA in the NHANES database, the outcome cannot be considered reliable. However, this result may provide the basis for the future clinical use of BBs and may guide future treatment strategies for patients with AMD. Several experimental studies have reported that the β-AR plays a critical role in the development and progression of AMD, and suggest that BBs may be prophylactic drugs for nAMD. For example, propranolol was found to reduce retinal neovascularization and vascular leakage, and was considered to downregulate retinal VEGF and insulin-like growth factor 1 expression (21). Carvedilol has also been demonstrated to modulate the expression of VEGF and hypoxia-inducible factor-1α induced by hypoxia (22). Dal Monte et al (23) also found that β-AR activation increased the expression of VEGF by increasing nitric oxide (NO) production, while β-AR blockers exerted the opposite effect by decreasing NO levels. BBs can reduce neovascularization. In addition, they can also improve the survival of retinal neurons. Betaxolol has been shown to exert neuroprotective effects in the retina by decreasing the expression of neuronal nitric oxide synthase (24). Betaxolol also reduces the death of neurons by reducing calcium ion and sodium ion influx (25,26). In summary, BB treatment has been shown to exert therapeutic effects against neovascularization, which is the main pathophysiological mechanism of nAMD, and against the death of retinal neurons, which is the dominant mechanism of GA (27,28). Although preclinical studies have indicated that BBs may be an effective treatment for AMD, clinical research on AMD and BBs has not yielded ideal results. A positive outcome was reported in the retrospective study by Montero et al (14). They found that the need for bevacizumab injections was decreased in patients with nAMD treated with oral systemic BBs compared to BB non-users (14). However, that study was limited by a small sample size. As hypertension is a risk factor for AMD, using participants not treated with BBs as the control group may possibly introduce confounding bias. A retrospective study involving the database of large national USA insurers found an opposite outcome (12). A comparator medication class with similar diseases was selected to address the bias. The effect on injections of anti-VEGF agents in hypertensive patients on BBs did not differ from that in hypertensive patients on CCBs (12). The aforementioned studies focused on injection incidence.
In comparison, other clinical studies have paid attention to the association between the risk of developing AMD and the use of BBs, and have found negative results. For example, Davis et al (18) found no difference in the use of BBs between patients with GA and wet AMD. Thomas et al (15) also found that there was no significant association between the use of BBs and choroidal neovascularization in nAMD. However, the Beaver Dam Eye Study (BDES) revealed opposite results (17). BB treatment was associated with an increased 5-year incidence of exudative AMD over a 20-year period. That study also had limitations, such as not considering BB treatment duration. Furthermore, two longitudinal studies on BB treatment duration and AMD were conducted to explore the association between BB treatment duration and nAMD. Yeung et al (19) found that the continuous use of BBs was associated with a higher risk of nAMD compared with non-users. By contrast, Kolomeyer et al (13) reported that patients using BBs were significantly less likely to develop nAMD at 90 and 180 days than patients using CCBs. The aforementioned studies concentrated on nAMD, while the study by Song et al (16) focused on GA; they found no significant association between BBs and GA. Several researchers have thus examined the association between BBs and AMD. Although several of the outcomes were negative, further studies are required to fully elucidate the association. In the present study, the use of BBs was not found to be significantly associated with early-stage AMD and nAMD compared with non-users, reflecting the conclusions of some studies (12,13,15,18). However, long-term treatment with non-selective BBs had a protective effect against late-stage AMD. Since the β-AR in the retina exhibits age-related overexpression and a super-sensitivity effect, it is possible that continuous BB treatment exerts a protective effect against AMD (29). However, the positive effect identified for GA in the present study differed from the study of Song et al (16), which found that the use of BBs had no association with the incidence of GA. There are some explanations accounting for this difference. On the one hand, the participants enrolled were different. In the present study, hypertensive patients not treated with BBs were set as the controls, while other studies did not consider hypertension (14,15,16,18). On the other hand, the BB treatment duration may differ, as it was not considered in other studies (12-18). Thus, a prolonged BB therapeutic duration may result in a different outcome. The present study has certain strengths. Firstly, hypertensive participants were enrolled to avoid confounding bias. In addition, a number of confounding factors aside from hypertension were adjusted for in the multivariate analysis. Secondly, the effect of different categories of BBs was investigated, while the majority of previous studies (12-16) only focused on non-selective BBs. Non-selective BBs block both the β1-AR and the β2-AR, while selective BBs mainly inhibit the β1-AR. However, all the subtypes of β-AR are expressed in retinal cells (30). Further examination of selective BBs in AMD is thus warranted. Herein, the association between the use of selective BBs and AMD was explored, revealing no significant association. Moreover, the association between the use of BBs and early-stage AMD was not explored in other studies (12-16,18,19). The BDES reported no significant difference between the use of BBs and early-stage AMD.
The results of the present study are in accordance with this outcome. Thirdly, the present study concentrated on the duration of BB treatment. Although short-term BB treatment (duration <6 years) had no effect on AMD, there was a significant trend of decreasing magnitude of the associations with late-stage AMD with increasing BB treatment duration. However, there are limitations to the present study which should be mentioned. Firstly, all the data used were derived from NHANES, which is a cross-sectional study; the inherent flaws of cross-sectional study designs are unavoidable. Secondly, whether BBs were used before or after the onset of AMD cannot be confirmed. Thirdly, the interactions among different anti-hypertensive drugs were not considered, which could lead to an overestimation of the protective effects of BBs. In conclusion, the present study demonstrates that although the use of BBs did not affect early-stage AMD, the long-term use of BBs is a protective factor against the risk of AMD among hypertensive patients. However, the outcomes obtained need to be further validated in completely randomized or multi-center clinical trials involving the use of BBs and GA. Acknowledgements. Not applicable. Funding. No funding was received. Availability of data and materials. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Authors' contributions. WF, DL and GS were involved in data collection. JL, YL, MC and HZ were involved in data analysis. JL and YL prepared the original draft of the manuscript. HZ was involved in the writing, critical reviewing and editing of the manuscript. YL and HZ confirm the authenticity of all the raw data. All authors have read and approved the final manuscript. Ethics approval and consent to participate. As the present study employed de-identified information from the NHANES database approved by the institutional review board of the NCHS, the Ethics Committee of the Second Affiliated Hospital of Wenzhou Medical University granted the study an exemption from ethical review. Patient consent for publication. Not applicable.
Heterogeneous dynamics of the three dimensional Coulomb glass out of equilibrium The non-equilibrium relaxational properties of a three dimensional Coulomb glass model are investigated by kinetic Monte Carlo simulations. Our results suggest a transition from stationary to non-stationary dynamics at the equilibrium glass transition temperature of the system. Below the transition the dynamic correlation functions lose time translation invariance and electron diffusion is anomalous. Two groups of carriers can be identified at each time scale: electrons whose motion is diffusive within a selected time window and electrons that during the same time interval remain confined in small regions in space. During the relaxation that follows a temperature quench, an exchange of electrons between these two groups takes place, and the non-equilibrium excess of diffusive electrons initially present decreases logarithmically with time as the system relaxes. This bimodal dynamical heterogeneity persists at higher temperatures, when time translation invariance is restored and electron diffusion is normal. The occupancy of the two dynamical modes is then stationary and its temperature dependence reflects a crossover between a low-temperature regime with a high concentration of electrons forming fluctuating dipoles and a high-temperature regime in which the concentration of diffusive electrons is high. I. INTRODUCTION. Recent experimental studies of hopping conductance in Anderson insulators showed striking non-equilibrium effects that persist for long times at low temperature. 1,2,3,4,5 These results support early theoretical predictions of a low-temperature glassy phase in interacting disordered electron systems in the strongly localized limit. 6,7 In this regime the essential physics is well captured by the Coulomb glass model, which describes a system of interacting electrons hopping between randomly distributed sites that correspond to the localization centers of the single-electron wavefunctions. 6,7,8,9,10,11 Although Coulomb glass models have been extensively studied during the last thirty years and their equilibrium properties are by now fairly well understood, 6,7,8,9,10,11,12 much less is known about the properties of Coulomb glasses out of equilibrium. 13,14,15,16 In Reference 16 the off-equilibrium dynamics of the Coulomb glass was investigated by studying the scaling properties of the non-stationary correlation and response functions, a tool often used in the study of other glassy systems such as structural and spin glasses. 17,18,19,20 It has recently been realized that further insights into the nature of the glassy phase can be gained from the analysis of dynamical heterogeneities. 21,22,23 This approach is particularly appealing in the context of Coulomb glasses, since the low-temperature hopping conductance is dominated by the diffusion of carriers on a set of conducting percolation paths. 6,7 We expect interactions between the electrons to have strong effects on this type of motion, since hops of individual carriers alter the effective random potential felt by the others and thus modify the structure of the percolating network. Fluctuation effects of this type were shown to lead to low-frequency 1/f-like noise in the conductance of Coulomb glasses. 24,25
In this paper we investigate the dynamical properties of the three dimensional random-site Coulomb glass model by kinetic Monte Carlo simulation using a realistic microscopic dynamics that favors the emergence of local effective constraints in the kinetics. In our simulations the system is initially quenched from infinite temperature to a working temperature T and its evolution in time is characterized for different values of T through the time dependence of the relevant correlation functions. One of our main observations is the appearance of a dynamical crossover from equilibrium dynamics to slow non-equilibrium dynamics at a temperature T_g ∼ T_c, where T_c is the equilibrium freezing transition temperature of the model. 11 This crossover takes place even for relatively small system sizes and occurs at the temperature at which the equilibration time of the finite sample becomes much longer than the time-scale of the simulation. In this regime the time-dependent correlation functions exhibit slow relaxation and have aging properties. We found that the dynamics of the Coulomb glass is heterogeneous as observed in other glassy systems. 21,22 Heterogeneities can be characterized by examination of the evolution of diffusion fronts. A statistical analysis of the electron trajectories over a fixed time window shows that most carriers belong to one or the other of two groups. The first group is that of those electrons that diffused away from their initial position during the chosen time interval. The second group is that of the electrons that remained confined in relatively small regions in space during that time. An exchange of electrons between these two dynamical modes takes place as the system relaxes after a temperature quench. In the aging regime this exchange is very slow and the non-equilibrium excess of electrons with metallic hopping present in the system right after the quench decreases logarithmically with time. In this temperature range the diffusion is anomalous. We found that the mean squared displacement ∆x²(t) ∼ t^η, where the exponent is η < 1 and depends on the age of the system. In the equilibrium regime time translation invariance is recovered and we observe normal diffusion. However, the bimodal dynamical heterogeneity still persists. The occupancy of the two dynamical modes is stationary and its temperature dependence reflects a crossover from a low-temperature regime with a high concentration of electrons forming fluctuating dipoles to a high-temperature regime in which the concentration of diffusive electrons is high. The paper is organized as follows. Section II contains a description of the model and of our numerical method. In Section III we discuss the properties of the local-density autocorrelation function in and out of equilibrium. Section IV is devoted to the analysis of electron diffusion in the system. In Section V we study the statistical properties of the diffusion fronts and show that they provide evidence for the existence of heterogeneous transport in the system. Finally, we summarize the conclusions of our study in Section VI. II. THE MODEL The Hamiltonian of the classical three dimensional Coulomb glass is 7

H = Σ_i ϕ_i n_i + (e²/2κ) Σ_{i≠j} (n_i − K)(n_j − K)/|R_i − R_j|,   (1)

where R_i denotes the center of localization of a single-particle localized electronic state, ϕ_i the energy of the state, n_i its occupation, K the uniform compensating charge density defined below, and e and κ are the electron charge and the medium's dielectric constant, respectively. Strong on-site correlations limit the occupancy of the electronic states to n_i = 0, 1.
Charge neutrality is assured by a uniform compensating positive charge density K = (1/N) Σ_i n_i. The positions R_i of the localized states and their energies ϕ_i are both random variables. However, it is common practice to study two complementary simplified versions of the model. These are respectively referred to as the lattice model and the random site model in Reference 15. In the lattice model the sites are assumed to lie on a regular cubic lattice and only the randomness in the energies ϕ_i is taken into consideration. Conversely, in the random site model 10,11 only the positional disorder is taken into account and ϕ_i = 0. It is standard practice to concentrate on the particle-hole symmetric case K = 1/2 for which the analysis of the results is simpler. While it has been established that the three dimensional random-site model has an equilibrium glass transition at low temperature 11 it is not yet clear whether this is also the case for the lattice model. 12,26 Therefore, we shall only discuss the dynamics of the random-site model in this paper. To model the dynamics of the system we let it evolve through sequential single-electron hops from occupied sites a to empty sites b. The transition rate, that mimics phonon assisted processes, is

W_ab = τ_0⁻¹ exp(−2R_ab/ξ) min[1, exp(−∆E_ab/T)],   (2)

where τ_0 is a microscopic time, ξ is the spatial extension of the localized wavefunctions and R_ab ≡ |R_a − R_b|. ∆E_ab, the total energy difference in the transition, is given by

∆E_ab = ε_b − ε_a − e²/(κR_ab),   ε_i = ϕ_i + (e²/κ) Σ_{j≠i} (n_j − K)/R_ij,   (3)

where the last term of the first expression is the attraction between the hopping electron and the hole it leaves behind. The first factor in Eq. (2) reflects the exponential decay of the electron-phonon matrix element between two electronic wavefunctions centered at positions R_a and R_b. The second factor is the thermal part of the transition probability. In Monte Carlo simulations performed by other authors the transition probability is taken independent of the distance R_ab [the first exponential in the equivalent of Eq. (2) is absent]. 9,11 This type of nonlocal dynamics, convenient for rapid equilibration, may not be appropriate for the study of off-equilibrium relaxation. With the local dynamics of Eq. (2) electron hops that decrease the energy are essentially restricted to a region whose linear size is the localization length ξ. This introduces dynamic constraints that contribute to make the relaxation out of an excited configuration slower. In our simulations we take the mean distance between sites a_0 as the unit of length, the Coulomb energy E_C = e²/(κa_0) as the unit of energy and we choose for convenience ξ = a_0. We simulated systems of N = L³ sites and M = N/2 electrons for samples with L = 6, 8 and 10 in the temperature range 0.01 ≤ T ≤ 0.1. The localization centers are distributed randomly and uniformly inside a computational cubic box of side L and we take periodic boundary conditions in all directions. To simulate a quench from high temperature we start from a random electron configuration at t = 0 and let the system freely evolve with the dynamics (2) at the working temperature T. The elementary Monte Carlo move consists of an attempt to move an electron from a randomly chosen occupied site a to an empty site b. Once a is chosen, the destination site b is chosen randomly using the probability distribution of the hoppings, P(R_ab) ∝ exp(−2R_ab). A cutoff at R_ab = L/2 is imposed by our use of periodic boundary conditions and restricts us to a range of temperatures for which hops at distances R_ab ∼ L can be neglected. The probability of acceptance of the move is the thermal factor in Eq. (2).
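To make the elementary move concrete, the following minimal sketch implements one hopping attempt under the assumptions stated above: a Metropolis-type acceptance for the thermal factor of Eq. (2), free (non-periodic) distances for brevity, and illustrative function and variable names rather than the authors' actual code.

```python
import numpy as np

def site_energies(R, n, phi, K=0.5):
    """Single-site energies eps_i = phi_i + sum_{j != i} (n_j - K)/R_ij,
    in units where e^2/kappa = 1 and lengths are measured in a_0.
    Periodic images are ignored here for brevity."""
    dist = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)            # exclude the j == i term
    return phi + ((n - K) / dist).sum(axis=1)

def hop_attempt(R, n, phi, T, K=0.5, xi=1.0, rng=None):
    """One elementary Monte Carlo move with the local dynamics of Eq. (2)."""
    rng = rng or np.random.default_rng()
    a = rng.choice(np.flatnonzero(n == 1))    # random occupied site
    d = np.linalg.norm(R - R[a], axis=1)      # distances from a to every site
    w = np.exp(-2.0 * d / xi) * (n == 0)      # P(R_ab) ~ exp(-2 R_ab/xi), empty sites only
    b = rng.choice(len(n), p=w / w.sum())     # destination site
    eps = site_energies(R, n, phi, K)
    dE = eps[b] - eps[a] - 1.0 / d[b]         # Eq. (3), including the electron-hole term
    if dE <= 0.0 or rng.random() < np.exp(-dE / T):
        n[a], n[b] = 0, 1                     # accept the hop
    return n
```

A Monte Carlo step would repeat this attempt N times; the half-filled random-site sample itself is simply R drawn uniformly in the box, n a random arrangement of N/2 ones, and phi = 0.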
If the hop is accepted, the vector displacement of the hopping electron δr is defined as the vector going from site a to the closest periodic image of site b. A Monte Carlo step (MCS) consists of N hopping attempts. Our runs were typically 2 × 10⁶ MCS long. Physical quantities were monitored as a function of time for each sample and the results were averaged over between 150 and 600 realizations of the disorder and initial conditions, depending on the system size and temperature. III. THE LOCAL DENSITY AUTOCORRELATION FUNCTION The two-time charge autocorrelation function is 16

C(t + t_w, t_w) = (4/N) Σ_i ⟨[n_i(t + t_w) − K][n_i(t_w) − K]⟩,   (4)

where the brackets denote the average over configurational disorder, initial conditions, and thermal noise, and the waiting time t_w is the time elapsed since the quench from infinite temperature. The function C(t + t_w, t_w) is the overlap of the charge configurations at times t + t_w and t_w, normalized such that C(t_w, t_w) = 1 at half filling. If t_w is larger than the equilibration time τ_eq the state of the system is time-translational invariant and the correlation function depends only on the time difference t. Otherwise, C depends on both t and t_w. We describe in the following results for a system of linear size L = 10. Fig. 1 displays C(t + t_w, t_w) vs. t for three waiting times, t_w = 10³, 10⁴ and 10⁵, and two representative cases, T = 0.07 [Fig. 1(a)] and T = 0.03 [Fig. 1(b)]. We observe that in the first case C(t + t_w, t_w) ≈ C(t), i.e. the system is time translational invariant. This means that equilibrium was reached within a time shorter than the shortest of the waiting times considered, τ_eq(T) < 10³. C(t) is thus the equilibrium relaxation function. In the second case, however, time translation invariance is lost and C(t + t_w, t_w) depends on both time arguments, a phenomenon known as aging. The system was thus unable to equilibrate within the time scale of the simulation. Note that for each value of t_w the relaxation is very slow (roughly logarithmic) and becomes slower with increasing t_w. We found that non-equilibrium relaxation appears below a dynamic crossover temperature T_g ∼ 0.05. The value of T_g is remarkably close to the equilibrium transition temperature of the random-site model T_c = 0.043 determined in Ref. [11]. Further insights on the properties of the correlation functions can be gained by performing a scaling analysis of the data. We discuss separately the cases of high and low temperatures. A. T > T_g In this temperature region the system reaches equilibrium within the simulation time. We display in Fig. 2 the equilibrium correlation function C(t) obtained for several temperatures in the range 0.05 ≤ T ≤ 0.1. As shown in the inset to Fig. 2, the curves for the various temperatures collapse rather well into a single master curve when C(t) is represented as a function of the scaled variable t/τ_eq(T), where τ_eq(T) = exp(T_0/T) with T_0 ∼ 0.45. The equilibrium relaxation time thus obeys an Arrhenius law above the dynamic crossover temperature. Qualitatively similar results were obtained in simulations of the two-dimensional version of the random-site model. In previous work on the 3D model using the non-local dynamics described above, a power-law divergence of the relaxation time was reported at the transition temperature T_c. 11 In our case this temperature T_c is of the same order as the dynamic crossover temperature T_g for which our samples stay out of equilibrium during the entire simulation time. Therefore, we have no access to the equilibrium critical dynamics of the model.
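Both the equilibrium master curve of Fig. 2 and the aging collapse discussed in the next subsection amount to the same operation: rescale the time axis of each curve and check how well the rescaled curves coincide. A minimal sketch of that check, with a hypothetical data layout (not the authors' code):

```python
import numpy as np

def collapse_spread(curves, scales, n_grid=50):
    """Dispersion of a family of curves after rescaling time.

    curves : dict mapping a label (T or t_w) -> (t, C) arrays
    scales : dict mapping the same labels -> rescaling factor s, so each
             curve is replotted against x = t / s.  For the equilibrium
             master curve s = tau_eq(T) = exp(T0/T); for the aging
             collapse of the next subsection s = t_w**mu.
    Returns the mean variance between the rescaled curves on a common
    logarithmic grid of x; a good collapse gives a small value."""
    xs = {k: t / scales[k] for k, (t, C) in curves.items()}
    lo = max(x.min() for x in xs.values())
    hi = min(x.max() for x in xs.values())
    grid = np.geomspace(lo, hi, n_grid)
    vals = [np.interp(grid, xs[k], C) for k, (t, C) in curves.items()]
    return float(np.mean(np.var(vals, axis=0)))

# e.g. pick the aging exponent mu of Eq. (6) by grid search:
# mu_best = min(np.linspace(0, 1, 101),
#               key=lambda mu: collapse_spread(curves,
#                                              {tw: tw**mu for tw in curves}))
```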
B. T < T_g This is the region in which non-equilibrium slow relaxation and aging are observed. Aging effects can be quantified by performing a non-stationary scaling analysis of the two-time autocorrelation functions. Experimental data in glasses are often analyzed in terms of the scaling form 17,18

C(t + t_w, t_w) = C(h(t + t_w)/h(t_w)),   (5)

where h(u) is known as the time-reparameterization function. A commonly used form is h(u) = exp(u^(1−µ)/(1−µ)). Since this form implies an effective time scale growing with t_w as ∼ t_w^µ we shall analyze our data in terms of the simpler expression

C(t + t_w, t_w) = C(t/t_w^µ).   (6)

Figure 3(a) illustrates the procedure for T = 0.03. The inset to the figure shows C(t + t_w, t_w) as a function of t for three waiting times, t_w = 10³, 10⁴ and 10⁵. The main figure presents the same data plotted as a function of t/t_w^µ with µ ∼ 0.7, the value for which the collapse of the data at large times t is the best. Repeating this procedure for several different temperatures we obtained the T-dependence of the aging exponent shown in Fig. 3(b). We find sub-aging behavior (µ ≤ 1) at low temperatures in the range T < T_g. When T → T_g, µ decreases steeply to a value close to zero, meaning that aging effects become negligible for T > T_g ∼ 0.05. The figure also shows the size dependence of the aging exponent. It can also be seen that when the linear size of the system increases from L = 6 to L = 10 the decay of µ near T_g ∼ T_c becomes steeper. This makes it plausible that µ vanishes (and aging stops) right at the glass transition temperature, T_c. Confirmation of this hypothesis would require a more detailed analysis of the L dependence of our results. IV. ELECTRON DIFFUSION The local charge correlation function discussed in the previous Section does not provide direct information on the dynamics of current fluctuations in the medium. Information on this essential aspect of the physics of the Coulomb glass may be obtained from an analysis of carrier diffusion. Let r_i(t) = (x_i(t), y_i(t), z_i(t)) denote the position vector of an electron at time t, where x_i(t), y_i(t) and z_i(t) are its coordinates before they are folded back into the simulation cell. We then have

r_i(t) = r_i(0) + Σ_{k=1}^{N_i(t)} δr_i(k),   (7)

where r_i(0) is the electron's initial position, N_i(t) is the total number of hops that it performed up to time t and δr_i(k) is the displacement associated with the k-th accepted move. The mean squared displacement between times t_w and t + t_w is defined as

∆(t + t_w, t_w) = (1/M) Σ_{i=1}^{M} ⟨[∆r_i(t + t_w, t_w)]²⟩,   (8)

where

∆r_i(t + t_w, t_w) = r_i(t + t_w) − r_i(t_w),   (9)

and the angular brackets denote as before an average over realizations of the disorder, initial conditions and the thermal histories. Figure 4 shows the t dependence of ∆(t + t_w, t_w) for three values of the waiting time, t_w = 10³, 10⁴ and 10⁵. Data are displayed for two temperatures, T = 0.03 < T_g [Fig. 4(a)], and T = 0.06 > T_g [Fig. 4(b)]. It can be seen that ∆(t + t_w, t_w) is time translational invariant for the higher temperature but exhibits aging for the lower one. Moreover, we observe that in the equilibrium regime T > T_g the average motion is diffusive, ∆(t + t_w, t_w) ≡ ∆(t) ∝ t, while this is not the case in the aging regime T < T_g. We discuss first the case of high temperatures. In this case we can define a diffusion constant through ∆(t + t_w, t_w) = D(T) t. We attempted to fit our results using the stretched-exponential expression D(T) ∼ exp[−(T_1/T)^β]. Using this form, a plot of T^β ln D⁻¹ as a function of T should result in a horizontal line. The inset to Fig. 5(a) shows that this is indeed obtained for β ∼ 1.
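The diagnostic just described can be written in a few lines; if the assumed stretched-exponential form holds with the right β, the returned values are constant, equal to T_1^β (a sketch, not the authors' code):

```python
import numpy as np

def stretched_exp_check(T, D, beta):
    """For D(T) ~ exp[-(T1/T)**beta], y = T**beta * ln(1/D) is
    independent of T (a horizontal line) when beta is correct."""
    T, D = np.asarray(T, float), np.asarray(D, float)
    return T**beta * np.log(1.0 / D)
```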
The figure also shows the result of a similar plot using the value β = 0.5 expected from the Efros-Shklovskii variable range hopping law. It is apparent that our data cannot be described with this value of β. Similar deviations from the Efros-Shklovskii law were also found in the 2D version of the model 15,27 for which we found β ≈ 3/4. 27 We now turn to the analysis of the low-temperature results. In this case the time dependence of ∆ cannot be described by a simple power law but we can still characterize the diffusion in this regime by fitting the t dependence of ∆(t + t_w, t_w) for our longest times (the last decade in t, for example) to an expression of the form

∆(t + t_w, t_w) ∝ t^η.   (10)

Equation (10) defines an effective diffusion exponent η that depends on both the waiting time and the temperature. The asymptotic fits for T = 0.03 are represented by the dotted lines in Fig. 4(a) where the normal diffusion limit, η = 1, is also shown for comparison. Figure 5 summarizes our results for the diffusion exponent in the aging region. The temperature dependence of η for several values of t_w is shown in Fig. 5(a). It can be seen that in the equilibrium regime, T > T_g, η = 1 for all t_w and T. Below T_g, however, the diffusion exponent decreases with decreasing temperature for all values of t_w, reflecting that electron motion becomes increasingly sluggish. The dependence of η on t_w at fixed T is shown in Fig. 5(b) for several temperatures. The exponent η increases with t_w, eventually reaching the diffusion limit η = 1 for long waiting times. At low temperature this variation is slow (approximately logarithmic). This is yet another manifestation of the slow relaxation that characterizes the glassy phase of the Coulomb glass. The characterization of diffusion through a diffusion exponent is familiar in the study of random walks in random media where one generally finds sub-diffusive behavior (η < 1) in those physical situations in which the distribution of the time intervals between successive hops of the diffusing particle has sufficiently long tails that the central limit theorem no longer holds. 28 It would be interesting to examine the distribution of these times in the aging regime of our system. V. HETEROGENEOUS DYNAMICS To establish a relationship between the observed aging effects and the microscopic motion of electrons we analyzed the evolution of the diffusion front. This latter is defined through the probability density of the squared displacements:

H_1(∆x², t, t_w) = (1/M) Σ_{i=1}^{M} ⟨δ(∆x² − [∆r_i(t + t_w, t_w)]²)⟩,   (11)

where δ(x) is the usual δ-function. It is easy to show that for a stationary and homogeneous diffusion process the above distribution takes the form

H_1^diff(∆x², t) = (1/Dt) Φ(∆x²/(Dt)),   (12)

where Φ(x) is approximately Gaussian. H_1^diff thus exhibits a single peak whose position increases linearly with time. To compute H_1 from our numerical data we must appropriately coarse-grain the variable ∆x². Since we found that the distribution of electron squared displacements is very broad we chose a regular coarse graining in log(∆x²). In Fig. 6(a) we show the evolution of the computed H_1(∆x², t, t_w) as a function of log(∆x²) for t_w = 10⁶ and T = 0.060 > T_g. For this value of the temperature H_1 is independent of the waiting time if t_w is large enough. This is consistent with the observed time translation invariance of the charge autocorrelation functions and the mean squared displacement at this temperature. The histograms shown in the figure were scaled conveniently and shifted vertically by an amount log(t) to make their baselines coincide with the time they correspond to.
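The logarithmic coarse-graining used to build these histograms can be sketched as follows (illustrative names; frozen electrons with ∆x² = 0 are dropped, since they fall outside any logarithmic bin):

```python
import numpy as np

def diffusion_front_histogram(dx2, n_bins=40):
    """Normalized histogram of squared displacements on a regular grid
    in log10(dx2), in the spirit of H_1 in Fig. 6(a).

    dx2 : per-electron squared displacements accumulated between
          t_w and t + t_w."""
    logx = np.log10(dx2[dx2 > 0])             # drop frozen electrons
    hist, edges = np.histogram(logx, bins=n_bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, hist
```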
Note that the diffusion front, located initially at ∆x² = 0, splits rapidly into two peaks. The position of the first peak is almost time independent at long times. This peak corresponds to squared displacements smaller than the average impurity distance, a_0. The center of the second peak increases linearly with time and its location coincides with the mean squared displacement at long times. The interpretation of these results is that the electron dynamics is heterogeneous and characterized by the existence of two dynamical modes that can be clearly distinguished: (a) Diffusive mode: electron motion is unbound and diffusive. It corresponds to metallic hopping. (b) Confined (dipolar) mode: electron motion remains bounded within a small region of space during the observation time; in many cases it consists of back-and-forth hops between a few nearby sites that form fluctuating dipoles. Examples of trajectories of electrons of these two types are shown in Fig. 6(b) at the same temperature and for the same waiting time as above. The displacements are represented as a function of the hop number N_i. We only show the ten first hops after a waiting time t_w = 10⁶. Some of the trajectories correspond to electrons that hop back and forth between two sites. ∆x² then oscillates between 0 and b_i², where b_i is the distance between the sites involved in the motion. This fluctuating dipolar motion contributes to most of the weight of the first peak in the histograms of Fig. 6(a). It is important to note that, although this fluctuating motion appears regular when plotted as a function of N_i, it is in fact extremely irregular when viewed as a function of time, since the time intervals between successive jumps are very widely distributed. The rest of the trajectories shown in Figure 6(b) are those of diffusive electrons that contribute to the metallic peak in the histograms of Fig. 6(a). We turn now to the analysis of the statistics of hopping rates which is important to understand the source of spontaneous fluctuations in the system. To this end we consider the joint distribution function

H_2(∆x², n_h, t, t_w) = (1/M) Σ_{i=1}^{M} ⟨δ(∆x² − [∆r_i(t + t_w, t_w)]²) δ(n_h − n_i)⟩,   (13)

where n_i = N_i/N_h is the number of hops of an electron i normalized by N_h = (1/M) Σ_i N_i, the average number of hops per electron. We use a regular coarse graining in log(∆x²) and n_h to compute the corresponding histograms. These are displayed in Figs. 7(a) and 7(b) for T = 0.070 > T_g and T = 0.040 < T_g, respectively. Both plots correspond to t = t_w = 10⁶. The two dynamical modes described above can be easily identified in the equilibrium situation [Fig. 7(a)]. The single peak at large ∆x² corresponds to the diffusive mode while the two ridges at low values of ∆x² correspond to the dipolar mode. Note that the hopping rates of electrons involved in dipolar motion have a much wider distribution than those of diffusive electrons. Fig. 7(b) shows that the two modes are still distinguishable at low temperatures, when the system is out of equilibrium. The structure of the modes is qualitatively different, however. Not only is the distribution of hopping rates now much broader, but a small fraction of electrons with low hopping rate lies in between the two modes. The presence of these electrons, that cannot be clearly associated with any of the modes, suggests that a very slow exchange of carriers between them may take place in the course of the relaxation. We shall further discuss this issue below. To explore in more detail the properties of electrons contributing to each of these modes we found it convenient to define for each electron the variable

d_i = (1/t) ∫_0^t dt′ [∆r_i(t_w + t′, t_w)]²/t′.   (14)

This quantity characterizes the mobility of the electron during a time interval of length t after t_w. For a carrier that diffuses normally in this time span with a diffusion constant D, d_i ∼ D.
For an electron that remained confined in a region of linear size a during this time interval, d_i ∼ a² ln(t)/t. Finally, for a frozen electron, i.e., one that did not move at all in the time interval under consideration, d_i = 0. We study the probability density of d defined as

f(d, t_w) = (1/M) Σ_{i=1}^{M} ⟨δ(d − d_i)⟩.   (15)

As discussed above we expect f(d, t_w) to exhibit two well separated peaks: one at d = D(t_w), the average diffusion constant of the diffusing electrons at time scale t_w, and the other at d = a(t_w)² ln(t)/t, where a(t_w) is the typical size of the regions of confined motion at the same time scale. The widths of the peaks give the dispersion of these quantities. We also introduce the cumulative distribution function

F(d, t_w) = ∫_0^d dd′ f(d′, t_w).   (16)

Results are displayed in Fig. 8 where we plot f(d, t_w) and its cumulative distribution function for three temperatures, T = 0.02 < T_g, T = 0.035 < T_g and T = 0.060 > T_g, and waiting times t_w = 10ⁿ with n = 0, 1, 2, 3, 4, 5 and 6. In all cases we see the appearance of the two peaks referred to above. The distributions are stationary for T > T_g, for which the system equilibrates rapidly, but they show aging in the non-equilibrium regime. A striking feature of the function f in the aging regime [cf. Fig. 8(a,c)] is that the positions of the peaks are almost independent of t_w. This means that the diffusion constant of the "metallic" electrons is time-independent. The height of the peaks, however, does depend on time scale: as time elapses from the quench, the proportion of diffusing carriers diminishes while that of confined ones increases. This is a direct manifestation of the exchange mechanism that we hinted at above. The fact that the location of the peaks is only weakly dependent on t_w indicates that the effective mobility of the diffusive electrons is not much affected by the slow changes of the environment that result from the aging process. The plateaus that appear in the corresponding cumulative distribution functions right after the peaks [cf. Figs. 8(b,d,f)] can be used to measure the relative populations of the modes. We see a first plateau corresponding to the area of f(d, t_w) below the dipolar peak and a second plateau corresponding to the additional area below the metallic peak. It can be seen that the cumulative distribution function does not saturate to unity at low temperatures [cf. Figs. 8(b,d)] while it does at high temperatures [cf. Fig. 8(f)]. The difference is due to the fact that, at low temperature, a fraction of the electrons remain frozen during the observation time. These were not counted in the numerical evaluation of the integral in Eq. (16). Another manifestation of the presence of frozen carriers is the pronounced asymmetry of the two lower ridges in the lower panel of Fig. 7 in the zero hopping-rate limit. We can now use the height of the plateaus in Figs. 8(b,d,f) to measure the populations of the different modes. These are represented as a function of temperature for t_w = 10⁶ in Fig. 9(a). It is seen that the proportion of diffusive electrons increases with increasing temperature while, at the same time, that of dipolar and frozen ones decreases. Note that a crossover between the regime dominated by diffusive electrons and that dominated by confined ones is located precisely at T_g. The waiting-time dependence of the populations is shown in Fig. 9(b) for two temperatures, T = 0.03 < T_g (left panel) and T = 0.06 > T_g (right panel). These populations are time independent at the highest temperature but vary logarithmically with time in the aging regime.
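Given unfolded trajectories, the mobility variable d_i and the mode populations can be estimated as in the following sketch; the trajectory layout and the threshold d_split (read off from the plateau region of the cumulative distribution) are assumptions of the illustration, not the authors' code:

```python
import numpy as np

def mode_populations(traj, d_split):
    """Fractions of frozen, confined ('dipolar') and diffusive electrons.

    traj    : (n_times, M, 3) unfolded positions sampled at unit time
              intervals starting at t_w (hypothetical layout)
    d_split : d value separating the two peaks of f(d, t_w)."""
    r0 = traj[0]                                  # positions at t_w
    tp = np.arange(1, len(traj))                  # elapsed times t'
    dr2 = np.sum((traj[1:] - r0) ** 2, axis=-1)   # squared displacements, (n_times-1, M)
    d = np.mean(dr2 / tp[:, None], axis=0)        # discretized mobility d_i, Eq. (14)
    frozen = float(np.mean(d == 0.0))
    confined = float(np.mean((d > 0.0) & (d < d_split)))
    diffusive = float(np.mean(d >= d_split))
    return frozen, confined, diffusive
```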
It is quite tempting to try to relate these observations to the relaxational properties of the conductivity. Assuming that the Einstein relation holds in the non-equilibrium regime, the conductivity at scale t is σ ∝ n ∫ dD D f(D, t), where n is the electron density and the integral extends over the diffusive peak. We saw earlier that the position D̄ of the peak is time independent. This implies σ ∼ n p_D D̄, where p_D is the fraction of diffusing carriers. Since the latter decreases logarithmically with time, this simple argument predicts logarithmic relaxation of the conductivity which is one of the main experimental observations. Whether the assumptions leading to this result are valid has to await direct computation of the current in the aging regime, in the presence of an applied electric field. 27 VI. CONCLUSIONS We have studied the relaxational properties of the three dimensional random-site Coulomb glass model after a quench from high temperature. We found a crossover from stationary to slow non-stationary dynamics at a temperature T_g that is very close to T_c, the equilibrium glass transition temperature of the model. This crossover can be seen even in relatively small samples because of the exponential increase of the equilibration time with decreasing temperature. We found that at low temperature the dynamics of local charge fluctuations and that of current fluctuations show aging. In the former case, the relaxation obeys simple scaling laws characterized by a temperature-dependent aging exponent µ(T). Analysis of the temperature and system-size dependence of µ(T) suggests that in the thermodynamic limit the observed crossover at T_g ∼ T_c becomes a real dynamic transition that occurs precisely at T_c. The analysis of the properties of diffusion fronts revealed that the dynamics of carriers is heterogeneous, as was found in other glassy systems. 21 We found that, for each timescale, two classes of electrons may be identified, those that have diffusive motion during the observation time and those whose motion in the same time interval remains confined. Only electrons belonging to the former class contribute to the dc conductivity while the others only contribute to the dielectric screening. In the region of low temperatures where aging is observed electrons are slowly exchanged between these two modes with the consequence that the population of metallic electrons decreases logarithmically with time without appreciable change of their diffusion constant. This provides a plausible explanation for the logarithmic relaxation of the conductance after a quench that was observed experimentally. We believe that the local microscopic dynamics used here, which is a realistic description of hopping processes in experimental systems, plays an important role in the phenomena that we observed. This type of dynamics favors the appearance of local effective constraints that relate our model to kinetically constrained models in which slow dynamics arises from restrictions on the allowed transitions between configurations.
2014-10-01T00:00:00.000Z
2004-07-28T00:00:00.000
{ "year": 2004, "sha1": "7cac7fbc4463279240438415faffcb3f0841911c", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0407734", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "1038cf748c4584941baeb471041b1b42eba8bc2a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
246907714
pes2o/s2orc
v3-fos-license
Work Motivation: The Roles of Individual Needs and Social Conditions Work motivation plays a vital role in the development of organizations, as it increases employee productivity and effectiveness. To expand insights into individuals’ work motivation, the authors investigated the influence of individuals’ competence, autonomy, and social relatedness on their work motivation. Additionally, the country-level moderating factors of those individual-level associations were examined. Hierarchical linear modeling (HLM) was used to analyze data from 32,614 individuals from 25 countries, obtained from the World Values Survey (WVS). Findings showed that autonomy and social relatedness positively impacted work motivation, while competence negatively influenced work motivation. Moreover, the individual-level associations were moderated by country-level religious affiliation, political participation, humane orientation, and in-group collectivism. Contributions, practical implications, and directions for further research were then discussed. Introduction Work motivation is considered an essential catalyst for the success of organizations, as it promotes employees' effective performance. To achieve an organization's objectives, the employer depends on the performance of their employees [1]. However, insufficiently motivated employees perform poorly despite being skillful [1,2]. Employers, therefore, need their employees to work with complete motivation rather than just showing up at their workplaces [3]. Work motivation remains a vital factor in organizational psychology, as it helps explain the causes of individual conduct in organizations [4]. Consequently, studies on the factors that encourage work motivation can contribute to theoretical accounts of the individual roots and the practical social conditions that optimize individuals' performance and wellness [5]. Several decades of research have endeavored to explain the dynamics that initiate work-related behavior. The primary factor examining this aspect is motivation, as it explains why individuals do what they do [6]. The basic psychological needs have represented a vital rationalization of individual differences in work motivation. Psychological needs are considered natural psychological nutrients and humans' inner resources. They have a close relationship with individual conduct and have a strong explicit meaning for work performance [7,8]. Different needs are essential drivers of individual functioning due to the satisfaction derived from dealing with them [9]. In addition to individual-level antecedents, the social context has also been regarded to have implications for work motivation. Social exchange and interaction among individuals accentuate the importance of work motivation as something to be studied with consideration of contextual factors [10]. Significant contributions have been made to the socio-psychological perspective of work motivation (Table 1). However, current literature shows three deficiencies. First, over 150 papers utilize the key approaches of psychological needs to explain motivational processes in the workplace [11], which attests to the vital role of psychological needs in interpreting individual work motivation. The association between psychological needs and work motivation has often been implicitly assumed; however, the influence of psychological needs on work motivation has been inadequately tested [8].
The verification of the extent and the direction of influence will provide a better understanding of, and offer distinct implications for, the facilitation of work motivation. In examining the influence of psychological needs on work motivation, this paper mainly focuses on the intrinsic aspect of motivation. The study of Alzahrani et al. (2018) [12] argued that although intrinsic motivation is more efficient than extrinsic motivation, researchers have mostly neglected it.

Table 1. Several investigated predictors of work motivation in general and intrinsic motivation in particular.

Contextual factors - Cultures: Bhagat et al., 1995 [15]; Erez, 1994/1997/2008 [16][17][18]
Contextual factors - Social situations: Deci & Ryan, 2012 [19]
Psychological needs (but inadequately tested): Olafsen et al., 2018 [8]

Second, there is no study examining the country-level moderating effects of social conditions and national cultures on individual relationships between psychological needs and work motivation. Pinder (2014) [20] argued that contextual practices could influence variables at the individual level. Culture is a crucial factor influencing motivation [15][16][17][18]. Researchers (e.g., [19]) have further suggested that both the proximal social situations (e.g., workgroup) and the distal social situations (e.g., cultural values) in which humans operate influence their need satisfaction and their motivation type. Intrinsic motivation interacts with prosocial motivation in judging work performance [21]. By including the social conditions in the framework, prosocial motivation is considered. Prosocial motivation refers to the desire to help and promote the welfare of others [22,23]. The study of Shao et al. (2019) [24] proposed that prosocial motivation promotes employee engagement in particular organizational tasks. Researchers often consider prosocial motivation as a pattern of intrinsic motivation [23]. This implies that when intrinsic motivation is investigated, prosocial motivation should be examined together to obtain a comprehensive understanding. Third, there are few studies using a considerable number of cross-national samples to investigate factors influencing work motivation. A cross-cultural analysis makes the findings more objective by minimizing individual bias towards any particular culture. Therefore, this examination is crucial to expanding insights on the influence of social situations on the individual associations between psychological needs and work motivation. Work Motivation: A Conceptual Background Work motivation is considered "a set of energetic forces that originate both within as well as beyond an individual's being, to initiate work-related behavior, and to determine its form, direction, intensity, and duration" [20]. Nicolescu and Verboncu (2008) [25] argued that work motivation contributes directly and indirectly to employees' performance. Additionally, research (e.g., [26]) has postulated that work motivation could be seen as a source of positive energy that leads to employees' self-recognition and self-fulfillment. Therefore, work motivation is an antecedent of the self-actualization of individuals and the achievement of organizations. Literature has identified several models of work motivation. One of the primary models is Maslow's (1954) [27] need hierarchy theory, which proposes that humans fulfill a set of needs, including physiological, safety and security, belongingness, esteem, and self-actualization.
Additionally, Herzberg's (1966) [28] motivation-hygiene theory proposed that work motivation is mainly influenced by the job's intrinsic challenge and provision of opportunities for recognition and reinforcement. More contemporary models also emerged. For instance, the study of Nicolescu and Verboncu (2008) [25] has categorized the types of motivation into four pairs, including positive-negative, intrinsic-extrinsic, cognitive-affective, and economic-moral-spiritual. Additionally, Ryan and Deci [29] focused on intrinsic motivation and extrinsic motivation. With the existence of numerous factors that relate to work motivation, this paper mainly focuses on intrinsic motivation. Previous research found that emotional intelligence and interpersonal relationship quality predict individuals' intrinsic motivation [14]. Additionally, the study of Lin (2020) [13] argued that personal factors, including age, gender, educational level, living setting, health status, and family support, impact people's intrinsic motivation. To understand more about intrinsic motivation, the authors examined individuals' psychological needs. Fulfillment of the basic needs is related to wellness and effective performance [7]. Since intrinsic motivation results in high-quality creativity, recognizing the factors influencing intrinsic motivation is important [5]. Although a significant number of important contributions have been made regarding intrinsic motivation, self-determination theory is of particular significance for this study. Self-determination theory (SDT) postulates that all humans possess a variety of basic psychological needs. One of the most crucial needs is the need for competence [30,31], which makes individuals feel confident and effective in their actions. Additionally, the need for autonomy [32] is one of the important psychological needs, which makes people satisfied with the optimal wellness and good performance obtained as a result of their own decisions. Moreover, SDT proposed the crucial importance of interpersonal relationships and how social forces can influence thoughts, emotions, and behaviors [33]. This means that the psychological need for social relatedness [34] also plays a significant role in humans' psychological traits. Individuals need to be cared for by others and care for others to perceive belongingness. The need for relatedness can motivate people to behave more socially [35]. Prior research (e.g., [36]) has explored self-determination theory and related theories as approaches to work motivation and organizational behavior. The study of Van den Broeck et al. (2010) [37] emphasized the roles of autonomy, competence, and relatedness at the workplace. This paper contributes to a more comprehensive understanding of intrinsic work motivation by further examining the impact of these three factors on work motivation as well as the moderating effects of social contexts. Individuals' Competence and Work Motivation Competence is "the collective learning in the organization, especially how to coordinate diverse production skills and integrate multiple streams of technologies" [38]. The study of Hernández-March et al. (2009) [39] argued that a stronger competence was commonly found in university graduates rather than in those without higher education. Competence has been considered a significant factor of work motivation that enhances productivity and profits.
Harter's (1983) [40] model of motivation proposed that competence enhances motivation because competence promotes flexibility for individuals [41]. Likewise, Patall et al. (2014) [42] indirectly argued that competence positively affects work motivation. Individuals become more engaged in activities that demonstrate their competence [6]. When people perceive that they are competent enough to attain goals, they generally feel confident and concentrate their efforts on achieving their objectives as soon as possible for their self-fulfillment. Hypothesis 1 (H1). Individuals' competence positively relates to their work motivation. Individuals' Autonomy and Work Motivation Autonomy is viewed as "self-determination, self-rule, liberty of rights, freedom of will and being one's own person" [43]. Reeve (2006) [44] argued that autonomy is a primary theoretical approach in the study of human motivation and emotion. Autonomy denotes that certain conduct is performed with a sense of willingness [30]. Several researchers (e.g., [45]) investigated the positive relationship between individuals' autonomy and work motivation. When humans are involved in actions because of their interest, they fully perform those activities volitionally [36]. Dickinson (1995) [46] also proposed that autonomous individuals are more highly motivated, and autonomy breeds more effective outcomes. Moreover, when individuals have a right to make their own decisions, they tend to be more considerate and responsible for those decisions, as they need to take accountability for their actions. Bandura (1991) [47] has argued that humans' ability to reflect, react, and direct their actions motivates them for future purposes. Therefore, autonomy motivates individuals to work harder and overcome difficulties to achieve their objectives. Hypothesis 2 (H2). Individuals' autonomy positively relates to their work motivation. Individuals' Social Relatedness and Work Motivation The psychological need for social relatedness occurs when an individual has a sense of being secure, related to, or understood by others in the social environment [48]. The relatedness need is fulfilled when humans experience the feeling of close relationships with others [49]. Researchers (e.g., [34]) have postulated that the need for relatedness reflects humans' natural tendency to feel associated with others, such as being a member of any social groups, or to love and care as well as be loved and cared for. Prior studies have shown that social relatedness strongly impacts motivation [50][51][52]. Social relatedness offers people many opportunities to communicate with others, making them more motivated at the workplace, aligning them with the group's shared objectives. Marks (1974) [53] suggested that social relatedness encourages individuals to focus on community welfare as a reference for their behavior, resulting in enhanced work motivation. Moreover, when individuals feel that they relate to and are cared for by others, their motivation can be maximized since their relatedness need is fulfilled [54]. Therefore, establishing close relationships with others plays a vital role in promoting human motivation [55]. When people perceive that they are cared for and loved by others, they tend to create positive outcomes for common benefits to deserve the kindness received, thereby motivating them to work harder. Hypothesis 3 (H3). Individuals' social relatedness positively relates to their work motivation. Aside from exploring the influence of psychological needs on work motivation, this paper also considers country-level factors.
Previous research (e.g., [56]) has examined the influence of social institutions and national cultures on work motivation. However, the moderating effects of country-level factors have yet to be investigated, given the contextual impacts on individual needs, attitudes, and behavior. Although social conditions provide the most common interpretation for nation-level variance in individual work behaviors [57], few cross-national studies examine social conditions and individual work behaviors [56]. Hence, this paper investigates the moderating effects, including religious affiliation, political participation, humane orientation, and in-group collectivism, on the psychological needs-work motivation association. A notable theory to explain the importance of contextual factors in work motivation that is customarily linked with SDT is the concept of prosocial motivation. Prosocial motivation suggests that individuals have the desire to expend efforts in safeguarding and promoting others' well-being [58,59]. It is proposed that prosocial motivation strengthens endurance, performance, and productivity, as well as generates creativity that encourages individuals to develop valuable and novel ideas [21,60]. Prosocial motivation is found to interact with intrinsic motivation in influencing positive work outcomes [21,61]. However, there are few studies examining the effects of prosocial motivation on work motivation [62]. Utilizing the concept of prosocial motivation and examining it on a country-level, this paper suggests that prosocial factors promote basic psychological needs satisfaction that reinforces motivational processes at work. Therefore, prosocial behaviors and values may enhance the positive impact of individuals' basic psychological needs, including competence, autonomy, and social relatedness, on work motivation. Religious Affiliation Religions manifest values that are usually employed as grounds to investigate what is right and wrong [63]. Religious affiliation is considered prosocial because it satisfies the need for belongingness and upholds collective well-being through gatherings to worship, seek assistance, and offer comfort within religious communities. Hence, religious affiliation promotes the satisfaction of individuals' psychological needs, which directs motivation at work and life in general. Research (e.g., [64]) has argued that religious affiliation is an essential motivational component given its impact on psychological processes. The study of Simon and Primavera (1972) [65] investigated the relationship between religious affiliation and work motivation. For individuals characterized by competence, autonomy, and social relatedness, attachment to religious principles increases their motivation to accomplish organizational goals. Religious membership will increase the influence of psychological needs on work motivation. The tendency of individuals affiliated with any religion to be demotivated is lower than that of those who are not. Individuals with religious affiliations also tend to work harder as the virtue of hard work is aligned with religious principles. Accordingly, religious affiliation may enhance the positive association between individuals' psychological needs and work motivation. Political Participation Political participation, indicated by people's voting habits, plays a crucial role in ensuring citizens' well-being and security [66]. Political participation encourages shared beliefs and collective goals among individuals [67].
The communication and interaction among people help them grasp the government's developmental strategies, motivating them to work harder. Political participation is a collective pursuit that makes societal members feel more confident, socially related, and motivated at work to achieve communal targets. Increased political participation reinforces effective public policy to enhance its members' welfare, congruent with the perspectives of prosocial motivation. The prosocial values and behaviors derived from political participation satisfy human needs and interact positively with intrinsic motivation. Therefore, political participation may strengthen the positive influence of individuals' competence, autonomy, and social relatedness on work motivation. Conversely, poor political participation is perceived as a separation from society that may lead to demotivation. In a society with poor political participation, an individualistic mentality is encouraged, thereby decreasing the desire to pursue cooperative endeavors. Humane Orientation GLOBE characterizes humane orientation as "the degree to which an organization or society encourages and rewards individuals for being fair, altruistic, generous, caring, and kind to others" [68]. Research (e.g., [69,70]) has argued that a high humane orientation encourages members to develop a strong sense of belonging, commit to fair treatment, and manifest benevolence. The desire to help others or enhance others' well-being indicates prosocial values and behaviors [71,72]. Since humane orientation is correlated with philanthropy and promotes good relations, this cultural value may enhance work motivation. Fairness, which is derived from a humane-oriented society, is one of the most vital influences on work motivation [1]. Moreover, altruism, promoted by humane-oriented societies, encourages individuals to sacrifice individual interests for shared benefits. Altruism then encourages attachment to others' welfare and increases resources needed for prosocial behaviors such as work [73,74]. Members of humane-oriented countries view work in a positive light: it is an opportunity for them to perform altruistic behaviors and engage in collective actions. Therefore, people are more likely to work harder for common interests in humane-oriented societies. In such conditions, individuals with competence, autonomy, and social relatedness will be more motivated to work. By contrast, a less humane-oriented society gives prominence to material wealth and personal enjoyment [75]. Although this may be perceived as a positive influence on the association between psychological needs and work motivation, such an individualistic mindset works against the prosocial factors that further motivate individuals. In-Group Collectivism House et al. (2004) [68] defined in-group collectivism as "the degree to which individuals express pride, loyalty, and cohesiveness in their organizations or families". Collectivistic cultures indicate the need for individuals to rely on group membership for identification [76]. High collectivism enhances equity, solidarity, loyalty, and encouragement [77,78]. Humans living in a collectivist culture are interdependent and recognize their responsibilities towards each other [79]. In-group collectivism transfers the concepts of social engagement, interdependence with others, and care for the group over the self (e.g., [79][80][81]), thereby motivating individuals to work harder for the common interests. Oyserman et al.
(2002) [82] have further argued that individualistic values encourage an independent personality, whereas collectivistic values form an interdependent one. Therefore, in-group collectivism is a prosocial value that emphasizes the importance of reciprocal relationships and encourages people to work harder to benefit the group. By contrast, low collectivism promotes individual interests and personal well-being while neglecting the value of having strong relations with others [70]. Considering that in-group collectivism promotes individuals' prosocial behaviors, people who are competent, autonomous, and socially related in collectivistic societies are less likely to be demotivated at the workplace. Consequently, in-group collectivism may intensify the positive influence of individuals' competence, autonomy, and social relatedness on their work motivation. Hypothesis 4 (H4). (a-d): The positive relationship between individuals' competence and their work motivation is enhanced as religious affiliation (a), political participation (b), humane orientation (c), and in-group collectivism (d) increase. Hypothesis 5 (H5). (a-d): The positive relationship between individuals' autonomy and their work motivation is enhanced as religious affiliation (a), political participation (b), humane orientation (c), and in-group collectivism (d) increase. Hypothesis 6 (H6). (a-d): The positive relationship between individuals' social relatedness and their work motivation is enhanced as religious affiliation (a), political participation (b), humane orientation (c), and in-group collectivism (d) increase. Sample The data came from the seventh wave (2017-2021) of the World Values Survey (WVS) [83], which examines humans' beliefs and values. This survey is performed every five years to explore changes in people's values and perceptions. Face-to-face interviews, or phone interviews for remote areas, were conducted by local organizations. Almost 90 percent of the world's population is represented in the WVS. At least 1000 individuals were selected as respondents to represent each nation's population. Further information regarding the WVS can be reached at the WVS website (http://www.worldvaluessurvey.org, accessed on 14 October 2021). The samples of this study were based on the availability of national-level data for the moderators and individual-level data for the measures of independent and dependent variables. Respondents without answers on the individual measures and corresponding country-level data were excluded from the analysis. The final data included 32,614 respondents in 25 countries aged 18 and above. The 25 countries included Argentina, Australia, Brazil, China, Colombia, Ecuador, Egypt, Germany, Greece, Guatemala, Hong Kong, Indonesia, Iran, Japan, Kazakhstan, Malaysia, Mexico, New Zealand, Philippines, Russia, South Korea, Taiwan, Thailand, Turkey, and the USA. Dependent Variable Consistent with previous researchers (e.g., [84]), the authors used four items to gauge individual work motivation, namely "Indicate how important work is in your life", "People who do not work turn lazy", "Work is a duty towards society", and "Work should always come first, even if it means less spare time". The first item was measured on a scale from 1 to 4, in which lower scores indicate a higher level of work importance. The other three items were gauged on a scale from 1 to 5 (1 indicating strongly agree and 5 indicating strongly disagree). The scores for each item were reverse coded, and the mean scores were computed so that higher scores indicate greater work motivation. Independent Variables The independent variables of this study include individuals' competence, autonomy, and social relatedness. First, people's competence was measured by the item "What is the highest educational level that you attained" on a scale from 0 to 8, in which higher scores indicate a higher level of educational attainment.
The authors used the item to gauge individual competence, as a capacity for learning is highlighted in the examination of competence [39]. Second, a scale from 1 to 10 was utilized to measure the item "How much freedom of choice and control", which represented individual autonomy (1 indicating no choice at all and 10 indicating a great deal of choice). The authors used the item to gauge people's autonomy as this item indicates the degree to which individuals can make their own decisions. Finally, the individual's social relatedness was gauged by twelve items, representing twelve types of organizations where individuals are active/inactive members or do not belong. The twelve items were measured on a scale from 0 to 2 (0 indicating do not belong, 1 indicating inactive member, and 2 indicating active member). The mean score of the twelve items represents the individual's social relatedness. Membership in organizations represents social relatedness, as this indicates the reciprocal relationship between the individual and the organization through their mutual rights, responsibilities, and obligations towards each other [85]. Moderators The four country-level moderators in this study were religious affiliation, political participation, humane orientation, and in-group collectivism. Similar to prior research (e.g., [86]), the authors used the percentage of the country's population with religious affiliation obtained from the Pew Research Center 2015 [87]. Secondly, the index of voter turnout collected from the International Institute for Democracy and Electoral Assistance [88] was utilized to gauge political participation. Voting habits are an indicator of an individual's presence in their country's life, and a nation with a high index of voter turnout illustrates its substantial degree of political participation [89]. Finally, two cultural values, including humane orientation and in-group collectivism, were obtained from the GLOBE study [68]. The authors used scores on cultural practices as the moderators for this study because they indicate actual behaviors, "the way things are done in this culture" [68]. Control Variables Several individual-level and country-level elements related to the dependent variable were considered control variables. The effects of gender, marital status, age, and income level were accounted for, as these four variables are basic personal factors that may impact an individual's motivation [90]. Gender (1 indicating male and 0 indicating female) and marital status (1 indicating married and 0 indicating other status) were dummy coded. Moreover, age was measured in years, while income level was gauged using a scale from 1 representing the lowest group to 10 representing the highest group. Along with the above individual-level controls, education and family strength were treated as country-level control variables. Education and family are primary institutions that shape individuals' motivation [91,92]. Similar to prior researchers (e.g., [93]), education was computed as two-thirds of the adult literacy rate attained from the UNESCO Institute for Statistics 2020 [94] and one-third of the mean years of schooling obtained from the Human Development Report 2020 [95]. This score is commonly accepted as representing access to education in a country [42]. Regarding family strength, the score was quantified by the ratio of divorces to marriages per 1000 members of the population, consistent with previous researchers (e.g., [93]).
The data was obtained from the United Nations Demographic Yearbook [96]. Measurement and Analysis To perform the descriptive statistics, cross-level correlations, scale reliability, confirmatory factor analysis, convergent validity, and discriminant validity, the authors utilized SPSS software. The framework of this study considers independent variables, dependent variables, and moderators at different levels. Thus, the authors used a hierarchical linear model (HLM) [97] to test the hypotheses. HLM was defined as a "complex form of ordinary least squares (OLS) regression that is used to analyze variance in the outcome variables when the predictor variables are at varying hierarchical levels" [98]. This technique evaluates the impacts of higher-level outcomes on lower-level ones while preserving an appropriate degree of analysis [99]. HLM has been employed in several cross-level studies (e.g., [100,101]). Table 2 presents a matrix of correlations and sample statistics for the individual-level and country-level variables. Tables 3 and 4 report convergent and discriminant validity test results, respectively. Finally, Table 5 illustrates results for hypotheses testing using HLM. Three models are presented in the table: those of individual-level main effects and control variables (Model 1), those of country-level main effects (Model 2), and country-level moderating effects (Model 3). Results For the confirmatory factor analysis, previous research (e.g., [102][103][104]) suggested that analysis of each variable requires at least three items. Factor analysis using statistical software will provide imprecise results if there are fewer than three items per variable [105]. Therefore, the authors only performed Confirmatory Factor Analysis (CFA) for social relatedness and work motivation. To assess the measurement, convergent and discriminant validity were tested. Composite Reliability (CR) and Average Variance Extracted (AVE) were performed to illustrate convergent validity. The study of Hair et al. (2019) [106] suggested that CR is required to be above a threshold of 0.7. On the other hand, the AVE value should be higher than a threshold of 0.5 [107]. As shown in Table 3, CR is acceptable while AVE is slightly lower than the threshold of 0.5. Despite this limitation in AVE, an acceptable result for discriminant validity is achieved. The discriminant validity was tested using Fornell and Larcker's (1981) criterion [107]. This criterion proposes that the square root of the AVE of any latent variable should be higher than its correlation with any other construct. The result of the discriminant validity test indicates that both latent constructs have a square root of AVE higher than their correlation with the other construct, as presented in Table 4. The authors argued that individuals' competence (H1), autonomy (H2), and social relatedness (H3) positively relate to their work motivation. However, the findings only supported H2 (β2 = 0.036, p < 0.001) and H3 (β3 = 0.042, p < 0.001). In contrast, the findings presented that H1 was also significant, but in the opposite direction compared with our original prediction. The result suggests that individuals' competence negatively relates to their work motivation. In Hypotheses 4a-d, we proposed that higher levels of religious affiliation (4a), political participation (4b), humane orientation (4c), and in-group collectivism (4d) strengthen the relationship described in H1.
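As an aside on the method, the kind of two-level specification tested here (individual-level predictors, a country-level moderator, a cross-level interaction, and country random effects) can be illustrated with a minimal sketch; the simulated stand-in data and column names below are hypothetical, not the authors' analysis code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
# toy stand-in for the merged WVS + GLOBE data (hypothetical column names)
df = pd.DataFrame({
    "country": rng.integers(0, 25, n),
    "work_motivation": rng.normal(3.5, 0.5, n),
    "competence": rng.integers(0, 9, n),
    "autonomy": rng.integers(1, 11, n),
    "relatedness": rng.uniform(0, 2, n),
})
globe = pd.DataFrame({"country": range(25),
                      "humane_orientation": rng.normal(4.0, 0.4, 25)})
df = df.merge(globe, on="country")

# two-level model: individual predictors, a country-level moderator,
# a cross-level interaction, and country random intercepts/slopes
model = smf.mixedlm(
    "work_motivation ~ competence * humane_orientation + autonomy + relatedness",
    data=df, groups=df["country"], re_formula="~competence",
)
print(model.fit().summary())
```

In this formulation the coefficient on the interaction term plays the role of the cross-level moderation parameters (the γ coefficients) reported in the results.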
However, the results only demonstrated support for two of these hypotheses, H4c (γ13 = 0.032, p < 0.001) and H4d (γ14 = 0.042, p < 0.001). In contrast, H4a was also significant, but in the direction opposite to our initial prediction. This result suggests that a higher level of religious affiliation weakens the association between individuals' competence and work motivation. In Hypotheses 5a-d, the authors argued that higher levels of religious affiliation (5a), political participation (5b), humane orientation (5c), and in-group collectivism (5d) enhance the positive relationship between individuals' autonomy and their work motivation. However, the results only supported H5b (γ22 = 0.012, p < 0.05) and H5c (γ23 = 0.012, p < 0.1), while H5a and H5d were not significant. In Hypotheses 6a-d, the authors argued that higher levels of religious affiliation (6a), political participation (6b), humane orientation (6c), and in-group collectivism (6d) enhance the positive relationship between individuals' social relatedness and their work motivation. However, the results only supported H6c (γ33 = 0.019, p < 0.01). In contrast, H6d was also significant, but in the direction opposite to our initial hypothesis, suggesting that higher in-group collectivism weakens the positive association between individuals' social relatedness and work motivation. Figures 1-5 represent the significant moderators of the associations examined. Regarding the statistical results of the control variables, gender, marital status, and age consistently indicated significant positive relationships with work motivation across all three models. On the other hand, family strength indicated a significant negative association with work motivation only in Model 1. Discussion The study's objective was to examine the influence of individuals' competence, autonomy, and social relatedness on their work motivation, as well as the impact of country-level moderators, including religious affiliation, political participation, humane orientation, and in-group collectivism, on these relationships. Seven primary findings emerge from this research. First, people's autonomy and social relatedness positively relate to their work motivation. This result is in line with the findings of prior researchers (e.g., [45,52]), postulating that humans' autonomy and social relatedness breed work motivation. Theurer et al. (2018) [108] argued that, among motivational elements, autonomy had been found to greatly predict positive work motivation. When people feel they have enough control over their activities, they are more confident and motivated to work. Along with autonomy, humans' social relatedness promotes communal benefits, thereby motivating people to work harder for their organization. Second, the association between individual competence and work motivation is moderated by cultural values, including humane orientation and in-group collectivism. The findings are consistent with the viewpoints of prior researchers (e.g., [69,70,77,78]), namely that a society with higher levels of humane orientation and in-group collectivism strengthens altruism, solidarity, loyalty, and the encouragement of individuals, which results in work motivation. Consequently, differences in individuals' competence translate into larger differences in work motivation when they live in a society with greater humane orientation and in-group collectivism.
Third, political participation and humane orientation moderate the relationship between individual autonomy and work motivation. These results are in line with the investigations of prior researchers (e.g., [18,45), which found that social circumstances and cultural practices promote people's motivation. Accordingly, the differences in individuals' autonomy based on their work motivation will be enhanced if they belong to nations with higher political participation and humane orientation. Fourth, the association between social relatedness and work motivation is moderated by humane orientation. Accordingly, in a humane-oriented society, the differences in individuals' social relatedness based on Discussion The study's objective was to examine the influence of individuals' competence, autonomy, and social relatedness on their work motivation, as well as the impact of country-level moderators, including religious affiliation, political participation, humane orientation, and in-group collectivism on their relationships. Seven primary findings are crucial in this research. First, people's autonomy and social relatedness positively relate to their work motivation. This result is in line with the findings of prior researchers (e.g., [45,52]), postulating that humans' autonomy and social relatedness breeds work motivation. The study of Theurer et al. (2018) [108] argued that, among motivational elements, autonomy had been found to greatly predict positive work motivation. When people feel they have enough control over their activities, they are more confident and motivated to work. Along with autonomy, humans' social relatedness promotes communal benefits, thereby motivating people to work harder for their organization. Second, the association between individual competence and work motivation is moderated by cultural values, including humane orientation and in-group collectivism. The findings are consistent with the viewpoints of prior researchers (e.g., [69,70,77,78]), namely that a society with higher levels of humane orientation and in-group collectivism strengthens altruism, solidarity, loyalty, and the encouragement of individuals, which results in work motivation. Consequently, there will be an increase in the differences in individuals' competence and work motivation if they live in a society with greater humane orientation and in-group collectivism. Third, political participation and humane orientation moderate the relationship between individual autonomy and work motivation. These results are in line with the investigations of prior researchers (e.g., [18,45), which found that social circumstances and cultural practices promote people's motivation. Accordingly, the differences in individuals' autonomy based on their work motivation will be enhanced if they belong to nations with higher political participation and humane orientation. Fourth, the association between social relatedness and work motivation is moderated by humane orientation. Accordingly, in a humane-oriented society, the differences in individuals' social relatedness based on their work motivation will be strengthened. The remaining findings were contrary to the original propositions. Pinder (2014) [20] argued that it is possible to find that contextual practices can influence variables at the individual level in the opposite prediction in motivation research. Fifth, individuals' competence negatively influences their work motivation. This finding proposes that more competent individuals are less motivated at work. 
One possible interpretation of this opposite result is that, when most members of the organization recognize an individual's competence, that individual may no longer perceive it as necessary to devote most of their time and energy to work. These individuals may believe that, no matter how reluctantly they perform, they are still regarded as competent because of their prior achievements. Additionally, competent individuals recognize that they have already sacrificed their enjoyment of life for their previous successes; therefore, they tend to offset this by investing their valuable time in other aspects of life. This is consistent with other researchers' investigations (e.g., [109]), which found that low-skilled individuals are more often compelled to engage in regular work activities and are more easily motivated than others. By contrast, highly competent individuals tend to be motivated by challenging tasks and by improving themselves through further education. Sixth, the relationship between competence and work motivation is negatively moderated by religious affiliation. This finding suggests that religious affiliation weakens the association between individuals' competence and work motivation. One possible explanation for this finding is that strong religious beliefs are the foundation for virtuous living [110]. Individuals with a religious affiliation usually employ religious principles to guide their behavior, regardless of their competence. In other words, both competent and incompetent individuals tend to be more motivated at the workplace if they are affiliated with any religion, thereby diminishing the influence of competence on work motivation. Seventh, the relationship between social relatedness and work motivation is negatively moderated by in-group collectivism. This result suggests that a higher degree of in-group collectivism weakens the association between individuals' social relatedness and work motivation. One possible explanation for this is that, in an in-group collectivist society, people put more weight on mutual relationships and encourage acts that build up the solidarity of groups. Since in-group collectivism is viewed as a social attachment in which people emphasize the group over the self (e.g., [79-81]), individuals are fairly conscious of their responsibility to the group regardless of their social relatedness. Both socially related and unrelated individuals belonging to in-group collectivist cultures tend to work harder for common goals. Accordingly, the influence of individuals' social relatedness on their work motivation is reduced. Limitations and Future Research Despite its significant contributions, this study has its limitations. The use of secondary data means that the data collection process was beyond the authors' control. However, the collection of cross-national data is time-consuming and costly; the authors used the available data but strove for the efficient use of multilevel data. The secondary data also restricted the measurement of individual-level factors to the available items. Moreover, it is quite complex to gauge an individual's work motivation appropriately, since personal work motivation may not be one-dimensional. Nevertheless, the authors made efforts to employ the measurements utilized by prior research. Similarly, it is complicated to measure social factors such as political participation; there are challenges in investigating social contexts due to the absence of direct measurements [111].
This compelled the authors to identify substitute measurements for this study. Finally, this study covered samples from 25 countries with different characteristics. Despite the attempt of this study to include the most relevant social conditions in the framework, the influence of other national differences and cultural sensitivities was not considered. This paper points to further research, considering that several frameworks and approaches should be employed to better examine motivation [112]. First, as some of the results were opposite to the original propositions based on the theoretical foundations employed, combining different concepts and approaches is necessary to enhance perspectives on psychological needs and social issues. For instance, the relationship between competence and work motivation can be further investigated by employing other theories to understand their association better. Similarly, the moderating effects of social contexts such as religious affiliation and in-group collectivism should be further examined to obtain a more in-depth comprehension of the roles of contextual circumstances and cultural values in individual-level relationships. Additionally, self-determination theory and the concept of prosocial motivation may be used to explore motivation towards specific behaviors in organizations, such as organizational citizenship and proactive behaviors. Organizational context, such as rewards, training, and culture, can be considered as part of the framework to enhance the conception of work motivation. Conclusions This study has utilized a multilevel framework to examine the influence of psychological needs and social context on work motivation. Through this research, a deeper understanding of the roles of competence, autonomy, and social relatedness, as well as social situations and cultural values, on work motivation is achieved. The contrary findings call for integrating other concepts and approaches towards a more comprehensive knowledge of work motivation. Along with the theoretical contribution, the study's findings offer practical implications. The satisfaction of psychological needs promotes self-motivation, which creates positive outcomes. Hence, organizations can provide programs and activities to promote employees' autonomy and social relatedness, as this will enhance their work motivation. Employee empowerment can be advocated by encouraging employees to make their own decisions at the workplace and by providing constructive criticism rather than instilling a fear of failure. Additionally, managers should encourage solidarity, support, and mutual care among employees. Putting more weight on employees' fulfillment of needs will further increase employees' motivation, thereby diminishing costs related to stress or turnover [50]. To establish a novel mechanism for promoting work motivation in the entire nation, the government should pay attention to the political structure and conditions that encourage citizens' participation. Additionally, a culture of humane orientation should be promoted in the workplace and society so that solidarity, kind assistance, and altruism among communities as well as among individuals can be strengthened. For instance, teamwork should be encouraged for employees to help each other overcome difficulties at the workplace or share responsibilities with their colleagues. This will motivate people to work harder for collective goals, contributing to the development of organizations.
Differential interactions between Notch and ID factors control neurogenesis by modulating Hes factor autoregulation During embryonic and adult neurogenesis, neural stem cells (NSCs) generate the correct number and types of neurons in a temporospatial fashion. Control of NSC activity and fate is crucial for brain formation and homeostasis. Neurogenesis in the embryonic and adult brain differs considerably, but Notch signaling and inhibitor of DNA-binding (ID) factors are pivotal in both. Notch and ID factors regulate NSC maintenance; however, it has been difficult to evaluate how these pathways potentially interact. Here, we combined mathematical modeling with analysis of single-cell transcriptomic data to elucidate unforeseen interactions between the Notch and ID factor pathways. During brain development, Notch signaling dominates and directly regulates Id4 expression, preventing other ID factors from inducing NSC quiescence. Conversely, during adult neurogenesis, Notch signaling and Id2/3 regulate neurogenesis in a complementary manner and ID factors can induce NSC maintenance and quiescence in the absence of Notch. Our analyses unveil key molecular interactions underlying NSC maintenance and mechanistic differences between embryonic and adult neurogenesis. Similar Notch and ID factor interactions may be crucial in other stem cell systems. Summary: Computational analysis of transcriptome data from neural stem cells reveals key differences in the synergistic interactions between Notch and inhibitor of DNA-binding factors during embryonic and adult neurogenesis. INTRODUCTION Neurogenesis is the production of neurons from neural stem cells (NSCs). The correct balance between NSC proliferation and differentiation is essential for embryonic formation of the brain and to confer regenerative capacities in the adult brain (Doe, 2008). Any deviation from the regulated neurogenic program can lead to drastic problems during development, including microcephaly and cognitive impairment. During embryonic development of the central nervous system, NSCs divide frequently and produce neurons either directly or via a committed intermediate progenitor (IP) cell (Fig. 1A). In the peak neurogenic period, a few NSCs exit the cell cycle and become quiescent (qNSCs) (Furutachi et al., 2015; Fuentealba et al., 2015). qNSCs are only reactivated in the adult neurogenic niches. In the adult brain, NSCs remain and neurogenesis is active in two defined regions: the ventricular-subventricular zone (V-SVZ) of the lateral ventricle wall; and the dentate gyrus of the hippocampus (Doetsch, 2003; Spalding et al., 2013; Ernst et al., 2014; Fuentealba et al., 2015; Furutachi et al., 2015). In the adult brain, the majority of the NSCs are mitotically inactive (qNSC) and infrequently enter the cell cycle, becoming active NSCs (aNSCs) to generate neurons before returning to quiescence or differentiating into glial cells (Fig. 1A) (Lois et al., 1996; Kirschenbaum et al., 1999; Encinas et al., 2011; Ihrie and Alvarez-Buylla, 2011; Shook et al., 2012; Giachino et al., 2014). Thus, although embryonic and adult neurogenesis share some similarities, there are also fundamental differences, and stem cell quiescence is one of them. Currently, it is not known why NSCs of the adult brain remain quiescent, and the mechanisms that control the transition of NSCs to activation are also unclear.
However, the balance between activity and quiescence is crucial not only to maintain the NSC pool for later neuron production and regeneration but also to prevent overproliferation and tumor formation (Lugert et al., 2010; Silva-Vargas et al., 2016). Thus, understanding the molecular mechanism that regulates maintenance and differentiation of NSCs is not only of theoretical interest but crucial for understanding disease mechanisms and developing new therapeutic strategies (Lie et al., 2004; Lazarov et al., 2010). The processes of NSC maintenance and differentiation are controlled by a core regulatory network of basic helix-loop-helix (bHLH) transcription factors (Lee, 1997; Ross et al., 2003; Heng and Guillemot, 2013; Imayoshi and Kageyama, 2014b). Members of the bHLH family have two conserved functional domains: a basic region for DNA binding and a helix-loop-helix (HLH) region for dimerization. These transcription factors can act as repressors or activators of gene expression. The hairy and enhancer of split (Hes) proteins Hes1 and Hes5 are central repressors of NSC differentiation during brain development (Ohtsuka et al., 1999; Kageyama et al., 2007, 2008), while bHLH factors including Ascl1 and Neurog2 are activators of neural differentiation and thus referred to as proneural factors (Wilkinson et al., 2013; Imayoshi and Kageyama, 2014b). Hes proteins in conjunction with TLE factors repress gene expression by binding to N-box and class-C sites in the promoters of target genes. Proneural factors activate gene expression by binding to E-box consensus sequences in the promoters of their targets (Fig. 1B) (Imayoshi and Kageyama, 2014b). Furthermore, the binding affinity of proneural factors to E-boxes can be enhanced by the formation of heterodimers with other members of the bHLH family: the E-proteins Tcf4 and Tcf3 (Massari and Murre, 2000; Bohrer et al., 2015). During brain development, Notch signaling activates Hes gene expression, which in turn inhibits NSC differentiation by repressing proneural genes, including Ascl1 and Neurog2 (Lee, 1997; Heng and Guillemot, 2013). In addition, Hes proteins repress expression of their own genes, counteracting Notch and leading to oscillations in their expression (Fig. 1B) (Hirata et al., 2002). This dynamic Hes gene activity has been suggested to result in low-level expression of proneural factors, including Ascl1, and this low expression drives cell cycle progression but is not sufficient to induce NSC differentiation (Castro et al., 2011; Imayoshi et al., 2013; Andersen et al., 2014). In contrast, high and sustained expression of Hes proteins drives complete repression of Ascl1, leading to cell cycle exit and NSC entry into a quiescent state (Baek et al., 2006; Castro et al., 2011; Imayoshi et al., 2013; Andersen et al., 2014). NSCs in the adult brain niches are predominantly quiescent, a state not observed frequently in the developing brain. How the Hes-proneural gene axis is differentially regulated in NSCs during development and in the adult brain is unknown. However, previous observations suggest that different levels of proneural activity in NSCs lead to three possible output states: quiescence, proliferation and differentiation when proneural activity is absent, intermediate/low or high, respectively (Fig. 1A). In the adult brain, NSC quiescence has been linked to the expression of inhibitor of DNA-binding factors (IDs) (Nam and Benezra, 2009).
IDs also have an HLH domain, which enables the formation of heterodimers with other bHLH factors, but lack the basic domain and for this reason cannot efficiently bind to DNA (Tzeng, 2003; Heng and Guillemot, 2013). Therefore, IDs act as inhibitors of the activity of bHLH factors. Experimentally, IDs have been shown to form dimers with Hes proteins; these heterodimers are unable to bind to the N-box-binding motifs in the Hes promoter and thus relieve Hes auto-repression (Bai et al., 2007). Interestingly, Hes-ID heterodimers can still repress target genes, including Ascl1, via class-C binding sites, albeit with lower efficiency than Hes homodimers (Bai et al., 2007). Thus, IDs are able to segregate the auto-repressive and downstream target gene repression functions of Hes factors. In addition, IDs also form ineffective heterodimers with proneural factors, including Ascl1, reducing their potential to drive differentiation by blocking their binding to E-boxes in target genes (Imayoshi and Kageyama, 2014b). Hence, IDs potentially regulate neurogenesis at multiple levels, including enhancing Hes expression and blocking proneural factor activity (Fig. 1C). Owing to the complex and reciprocal interplay between Notch-Hes and IDs, it has been challenging to assess the consequences of their interactions experimentally and their respective roles in the control of NSC activity. As a first step to address this problem, we developed a specific theoretical framework that takes into account the interactions between Notch, IDs and the members of the bHLH family of transcription factors. Our theoretical framework is in line with previous models of Hes (Lewis, 2003; Monk, 2003; Novák and Tyson, 2008; Wang et al., 2011; Pfeuty, 2015) and explicitly incorporates Notch-mediated activation of Hes gene expression, Hes-mediated repression of proneural expression, Hes auto-repression and homodimer formation. In order to recapitulate the different effects of IDs, we incorporated the possibility of Hes-ID and proneural-ID heterodimer formation into the model. We explored computationally the properties of this gene regulatory network and the conditions required to obtain NSC quiescence, maintenance of activated NSCs and differentiation. Once we had established a robust model that fulfilled these criteria, we challenged and validated our predictions by analyzing the gene expression of NSCs at the single-cell level. Finally, by evaluating the differences in the single-cell expression profile of adult and embryonic NSCs, we uncovered key differences between embryonic and adult neurogenesis. RESULTS Notch signaling alone cannot completely repress proneural activity and drive NSC quiescence The balance between Notch signal activity and proneural factor expression is pivotal in the regulation of NSC activity and neurogenic differentiation. Proneural factors, including Ascl1, are important for neuronal differentiation but at lower transient levels also induce NSC cell cycle entry. Complete repression of Hes expression is a prerequisite for proneural gene expression to reach levels that induce progenitor cell commitment and differentiation.
Fig. 1. The NSC differentiation processes in the embryonic and adult brain and its regulatory network. (A) NSC fate in the embryonic and adult brain is dependent on the levels of proneural transcription factor expression. During embryonic neurogenesis, the majority of the NSCs are in a mitotically active state (aNSC) while a few will enter quiescence (qNSC) and remain inactive until adulthood.
In the adult neurogenic niches, most NSCs are mitotically inactive (qNSC) and rarely transit to the mitotically active, neurogenic state (aNSC). In aNSCs, low levels of proneural activity drive cell cycle progression but are insufficient to induce differentiation. In the absence of proneural transcription factor activity, NSCs are quiescent (qNSC), and high proneural transcription factor activity drives neural differentiation (Diff). (B) The Notch-Hes-proneural transcription factor interaction network. Notch signaling through the DNA-binding protein Rbpj activates expression of Hes genes. Hes protein homodimers repress proneural gene expression, including Ascl1 and Neurog2, via N-box and class-C sites, and their own expression by binding to N-box sites in their promoter regions. Proneural transcription factors activate cell cycle progression and differentiation via E-box sites. (C) The currently known Notch-Hes-ID-proneural interactions. IDs form heterodimers with Hes transcription factors, which are unable to bind to N-box sites but can bind to class-C sites, although with lower efficiency than Hes homodimers. IDs also form heterodimers with proneural factors that are unable to activate the differentiation and cell cycle progression genes.
By contrast, proneural gene expression is completely repressed in qNSCs, and this repression is necessary for NSCs to exit the cell cycle (Castro et al., 2011; Imayoshi et al., 2013; Andersen et al., 2014). Therefore, we initially studied the dynamics of the Notch/Hes regulatory module in NSCs in the absence of any extra factor (Fig. 1B). To be consistent with previous theoretical and experimental evidence (Hirata et al., 2002; Monk, 2003; Wang et al., 2011), we adjusted our model such that Hes gene expression oscillates with a periodicity of 2-3 h in the presence of a Notch signal (see Materials and Methods; Fig. 2A,B). We then evaluated mathematically the effect of Notch activity on the levels of proneural gene expression. In agreement with experimental data, our model recapitulated that increasing Notch signaling decreases proneural gene expression (Fig. 2C). However, Notch signaling alone was unable to completely suppress proneural gene expression or even reduce it to the levels necessary for cell cycle exit (NSC quiescence) (Fig. 2C). We evaluated whether complete repression of proneural gene activity could be achieved by increasing the basal production rate of Hes and maximal levels of Hes by increasing Notch activity (Fig. 2D). Surprisingly, an increase in Hes production had little effect on the levels of proneural gene expression. Moreover, changes in Notch/Hes signaling lead to binary fates: high Notch leads to low-intermediate proneural activity that induces proliferation and active NSCs; low Notch leads to high proneural activity that results in differentiation (Fig. 2D). Therefore, our computational results suggest that, in the absence of any extra factor, Notch signaling cannot completely repress proneural activity and thereby induce NSC quiescence. Interestingly, this situation is seen during embryonic development, where most NSCs of the developing brain are mitotically active. Hes-negative feedback, and not oscillations, leads to the accumulation of the minimal proneural activity required for NSC proliferation As quiescence is the major state of NSCs in the adult brain, we investigated the conditions required for a complete repression of proneural gene expression.
It has been suggested that the oscillatory expression of Hes leads to low levels of proneural factor expression that enable and drive NSC proliferation while being insufficient for differentiation (Kageyama et al., 2009; Imayoshi et al., 2013; Imayoshi and Kageyama, 2014a). In order to investigate the impact of oscillations on proneural expression, we explored the regions of the parameter space where Hes expression no longer oscillates (Fig. S1). Rapid Hes protein degradation, long intronic delay and Hes auto-repression have all been shown to be required for oscillatory Hes expression (Bai et al., 2007; Yoshiura et al., 2007; Takashima et al., 2011). Analytical studies of Hes oscillations have identified intronic delay and protein degradation as key parameters that modulate Hes dynamics (Ay et al., 2013, 2014; Wang et al., 2014). Therefore, we explored how changes in these parameters would affect the expression level of the downstream proneural genes. We initially evaluated the effects of changing Hes protein half-life on the oscillatory behavior of Hes gene expression (Fig. 3A). Increasing Hes half-life substantially above the experimentally determined 22 min resulted in loss of oscillations and sustained Hes mRNA expression levels (Fig. 3A). Similarly, reducing Hes protein half-life shortened the periodicity of the oscillations (Fig. 3A). These findings are consistent with experimental evidence suggesting that both increases and decreases in the Hes protein degradation rate stop or dampen Hes mRNA oscillations, and this validates our mathematical framework (Yoshiura et al., 2007; Harima et al., 2014). Interestingly, however, we found that the average levels of proneural mRNA expression do not change with changes in the Hes protein degradation rate, and such changes do not lead to a complete block of proneural gene expression (Fig. 3B). Hence, regulation of Hes protein degradation in NSCs cannot account for cell cycle exit and quiescence. Therefore, we examined the effect of the intronic delay in transcript maturation on oscillatory Hes gene expression, proneural gene activity and NSC fate. It has been shown that Hes genes have three long introns that regulate transcription and splicing of the primary RNA transcript and delay maturation of the Hes7 mRNA by ∼19 min (Harima et al., 2014). Removal of one or two of the Hes gene introns increases the Hes transcript maturation rate and dampens or completely abolishes oscillations in expression (Takashima et al., 2011; Harima et al., 2014). Consistent with this, in our model we observed that the oscillatory behavior of the Hes genes is absent and expression is sustained with rapid transcript production and maturation. Conversely, a longer delay in Hes mRNA maturation leads to an increase in the periodicity of oscillations at most levels of Notch signal activity (Fig. 3C). Our model also predicts that proneural expression is not affected by changes in the Hes transcript maturation (production) rate and that this rate has little or no effect on NSC fate (Fig. 3D). Therefore, we examined the effect of Hes auto-repression on the Hes mRNA oscillatory period, proneural gene expression and NSC fate. We considered an auto-repression factor value of 0.0 to represent a complete repression of Hes gene expression by the Hes proteins. Conversely, an auto-repression factor value of 0.05 represents Hes gene repression of 95% with a residual transcription of 5% (Fig. 3E).
Under these conditions, a small relief of Hes auto-repression is enough to completely abolish Hes mRNA oscillations and induce sustained Hes gene expression over most of the range of Notch signal (Fig. 3E). This also leads to a substantial decrease in the expression of the proneural genes (Fig. 3F). Thus, our modeling of the Notch network revealed a mechanism by which relatively small reductions in the efficiency of Hes gene auto-repression lead to dramatic changes in proneural gene expression. Furthermore, the modulation of Hes auto-repression predicts a relative transcriptional profile of Notch, Hes and proneural factor genes that determines NSC differentiation, quiescence and active NSC maintenance. Together, these results suggest that Hes auto-repression restricts the effect of the Notch signal on proneural expression, leading to the maintenance of a minimum level of proneural activity even in the presence of high Notch signaling. Therefore, the efficiency of the Hes-negative feedback, and not Hes oscillatory expression per se, is crucial for the small accumulations in proneural expression required for NSC proliferation and differentiation. IDs potentiate Notch/Hes activity and drive complete repression of proneural expression Our mathematical model confirms the crucial importance of regulating Hes auto-repression in order to enable a complete repression of proneural expression and drive NSC quiescence. This is an important finding as it provides a mode for modulating NSC behavior to achieve the transition from NSC quiescence to activation and differentiation. Although IDs have been shown to regulate neurogenesis and NSC activity as well as alter Hes and proneural factor activity, it remains unclear how the combination of ID, Hes and proneural factor activity achieves the effects observed in genetic manipulation experiments (Baek et al., 2006; Nam and Benezra, 2009). We then incorporated IDs into our theoretical framework and observed that, by reducing Hes auto-repression, IDs lead to an increased and sustained expression of Hes genes (Fig. 4A). Thus, Hes expression levels are uncoupled from Notch signal intensity by simply increasing expression of ID proteins, which also results in non-oscillatory expression and high levels of Hes (Fig. 4A). In addition, by evaluating the effects of Notch and IDs on Hes, we found that Hes gene expression only reaches maximum levels in the presence of both Notch signaling and IDs (Fig. 4B). Furthermore, high levels of IDs are able to completely repress proneural gene activity, on the one hand by increasing Hes levels to repress transcription either via Hes homodimers or ID-Hes heterodimers, and on the other hand by directly interacting with proneural proteins to inhibit target gene activation (Figs 1C and 4C). By combining Notch signals and ID activity, our model then predicts that proneural activity segregates into three expression states that directly translate into different stem cell states: low/absent
proneural expression leading to NSC quiescence (qNSCs); low/intermediate proneural levels in the absence of IDs in cells with active Notch signaling, resulting in NSCs being mitotically active (aNSCs); and high proneural activity in the absence of both Notch signaling and IDs, which drives NSCs into differentiation (Diff) (Fig. 4D and Fig. S2).
Fig. 4D. Levels of proneural factor activity (protein level) for different levels of Notch activity and ID expression. IDs potentiate the effect of Notch signaling by releasing Hes auto-repression and can act in concert with Notch, forming a three-way switch that segregates NSCs into quiescent (qNSC) (high IDs), proliferative/active (aNSC) (low IDs, high Notch/Hes) or differentiated (Diff) (low IDs, low Notch/Hes) states. The oscillatory region is presented in Fig. S2.
This model then establishes clear and sharp boundaries in NSC fate that present a feasible explanation for the experimental data linking IDs and Notch signaling to NSC activation and differentiation. We then compared the expression levels of NSC markers (Slc1a3, Nr2e1, Sox9, Vcam1) (Llorens-Bobadilla et al., 2015) and active NSC/progenitor markers (Ascl1, Fos, Egr1, Sox4, Sox11) (Llorens-Bobadilla et al., 2015) across these progenitor subpopulations. As expected, TAPs (Hes-low ID-low) expressed relatively lower levels of the NSC markers than Id3+Hes−, Id3+Hes+ and Id3−Hes+ NSCs (Fig. 5A,B). Interestingly, Id3+Hes+ and Id3+Hes− NSCs expressed low levels of active NSC/progenitor markers, while Id3−Hes+ cells had high levels of active NSC/progenitor markers. As TAPs are mitotically active and Id3+ NSCs do not express genes associated with an active mitotic state, these findings indicate that Id3+ cells are mostly quiescent (blue in Fig. 5A,B), whereas most Id3−Hes+ cells also express markers of proliferation and TAPs (red in Fig. 5B). We come to similar conclusions when we consider the expression of Id2 in place of Id3, or even when considering the expression of all Id genes together (Figs S5 and S6), and when we use alternative NSC and proliferation markers (Figs S7 and S8). The analysis of these experimental data suggests that IDs are sufficient to drive NSC quiescence, which lends direct support to our theoretical model indicating that ID expression, and not Notch signaling itself, is crucial for NSC quiescence (Fig. 5C) (Nam and Benezra, 2009). Or, put another way, ID expression prevents adult NSCs from entering the mitotically active state by blocking proneural gene expression and activity. Mechanistically, our results suggest that downregulation of IDs by qNSCs releases the complete repression of proneural activity induced by sustained high-level Hes expression, and promotes cell cycle entry, at least in part, by enabling proneural factor expression. Our data analysis also indicates that active NSCs express the Notch ligand Delta (Fig. 5B). Notch-Delta signaling leads to lateral inhibition between neighboring cells, segregating them into two populations: receiver cells with high Notch/Hes and low Delta expression; and sender cells with low Notch/Hes and higher Delta expression. Sender cells further differentiate into TAPs while receiver cells remain as active NSCs and can go through another round of proliferation (Fig. 5C). According to our model, NSC differentiation could be controlled by IDs even in the absence of Notch signaling.
Fig. 5C. Model based on the predictions presented in Fig. 4D and on experimental results presented in B. High levels of IDs drive NSC quiescence. By decreasing IDs, the NSC becomes proliferative and stimulates the expression of the Notch ligand Delta. Notch-Delta lateral inhibition segregates neighboring active NSCs into high and low Notch signal. While the NSC with low Notch differentiates into a TAP cell, the NSC with high Notch remains proliferative and can go through another round of differentiation.
Similar results are found using Id2 instead of Id3, or considering all IDs (Id1-Id4) together, and for an alternative choice of NSC and proliferative markers (Figs S5-S8).
By lowering the levels of IDs in qNSCs, they can become active/proliferative. A further decrease in the levels of IDs would then lead to differentiation and ultimately to depletion of the NSC population. This is consistent with experimental evidence showing that inhibition of Notch signaling in the adult niche leads to the loss of the stem cell population and highlights the crucial role of Notch signaling in controlling NSC maintenance and differentiation (Chapouton et al., 2010; Imayoshi et al., 2010; Basak et al., 2012; Kawaguchi et al., 2013). Differences between ID-Notch regulation of embryonic and adult neurogenesis Our modeling results show a feasible mechanism by which Notch and IDs collaborate to regulate neurogenesis in a complementary manner. However, during embryonic stages of brain development, most NSCs are mitotically active although they express IDs (Yun et al., 2004). Therefore, we addressed the differences in Notch/Hes and ID interactions in embryonic and adult NSCs, and the reason why the Notch-Hes-ID axis induces quiescence of NSCs in the adult V-SVZ but not during embryonic development. We analyzed publicly available expression datasets of single embryonic progenitor cells in the ventricular zone of both the murine and human brain (Kawaguchi et al., 2008; Pollen et al., 2015). We found that the cells could be divided into two populations: proliferative NSCs that have high levels of both Hes and radial glial markers (embryonic NSCs; Slc1a3, Pax6, Sox2, Pdgfd and Gli3), and basal intermediate progenitors that express low levels of these radial glial markers and high levels of intermediate progenitor markers (Tbr2, Elavl4, Neurog1, Neurod1, Neurod4, Ppp1r17, Penk) (Fig. 6A,C and Fig. S9). We then evaluated the expression profile of IDs (Id1-4) in these different cells. We found that Id2 and Id4 were expressed by a significant fraction of cells and that, in contrast to the situation in adult NSCs, Id4 but not Id2 expression overlapped with the expression of Hes genes and radial glia markers (Fig. 6B,D). The function of Id4 differs markedly from that of Id1-3 in different tissues during development and in cancer (Patel et al., 2015). Id4, but not Id1, Id2 or Id3, is a target of Notch/Rbpj signaling (Li et al., 2012). Consistent with this, we observed an overlap in expression between Id4 and Hes genes in embryonic progenitors (Fig. 6). In addition, it has been suggested that Id4 can inhibit the function of other IDs (Sharma et al., 2015). This suggests that Notch regulates the expression of both Hes and Id4 in embryonic NSCs, and that Id4 blocks the inhibitory function of Id1-3. Thus, the predominance of Id4 over Id1-3 during embryonic neurogenesis is a key difference between embryonic and adult NSCs. This suggests that the expression of Id4 induced by Notch signaling prevents Id1-3-modulated repression of Hes autoregulation, thus preventing the sustained high-level expression of Hes genes that is required to block proneural gene expression and drive cell cycle exit. In addition, the inhibitory effect of Id4 on the other ID proteins also blocks Id1-3-mediated inhibition of proliferation. Consistent with this mechanism, neural progenitor cells in Id4 mutant mice show prolonged G1-S transition during brain development (Yun et al., 2004).
Moreover, Id4-mutant neural progenitor cells also show precocious differentiation, suggesting that Id4 also plays a role in regulating NSC differentiation, likely via sequestration of proneural factors (Yun et al., 2004). DISCUSSION The regulation of stem cell fate is highly complex. In the nervous system, an ever-increasing number of factors that can change NSC activity and fate are being uncovered. Signaling pathways downstream of many of these factors have either been shown to converge, synergize or even counteract each other. Notch signaling is a central regulator of NSC fate and plays key roles in regulating maintenance, proliferation and differentiation (Nyfeler et al., 2005; Ehm et al., 2010; Imayoshi et al., 2010; Lugert et al., 2010; Basak et al., 2012). The best known mode of Notch activity is to suppress expression of the proneural genes through expression of Hes factors (Lutolf et al., 2002). Hence, deletion of the core DNA-binding component of the Notch pathway, Rbpj, or the effectors Hes1 and Hes5, leads to NSC activation and precocious differentiation (Ehm et al., 2010; Imayoshi et al., 2010; Lugert et al., 2010; Basak et al., 2012). Similarly, ID proteins control NSC maintenance and activation in the developing nervous system and control NSC quiescence and fate in the adult brain (Nam and Benezra, 2009). IDs are downstream components of the TGFβ pathway but share bHLH transcriptional regulators of the Hes and proneural family as common targets with the Notch pathway (Viñals et al., 2004; Yun et al., 2004; Bai et al., 2007; Nam and Benezra, 2009). IDs form heterodimers with bHLH factors, which produces inactive complexes with most partners. However, heterodimers of Hes and IDs retain some activity, particularly at the promoters of proneural factor targets (Bai et al., 2007). It has been a major challenge to understand how Notch signaling controls fate. Until recently, Notch signaling was considered to be a molecular switch, activated by lateral signaling between neighboring cells. More recently, it was found that, rather than being a switch, Notch signaling is highly dynamic; in addition, in-built oscillatory expression of the Notch target genes Hes1 and Hes5 is crucial to Notch function. The dynamic expression of Hes proteins projects onto the proneural genes, whose expression oscillates out of phase with the Hes genes. During embryonic neurogenesis, Notch signaling and dynamic expression of Hes genes and proneural genes are the predominant mechanism regulating NSC maintenance and differentiation; both show a salt-and-pepper pattern in NSCs of the VZ. This dynamic Notch activity coincides with most NSCs being mitotically active, whereas neurogenic differentiation is blocked by repressing proneural gene expression to low levels and preventing accumulation of these neurogenic factors to levels sufficient for differentiation.
Fig. 6. (A) PCA representation of single cells from the ventricular zone of the murine embryo (Kawaguchi et al., 2008). Color represents the expression levels of Hes genes (Hes1, Hes5) and radial glia (embryonic NSC) markers (Slc1a3, Pax6, Sox2, Pdgfd, Gli3). (B) Color represents the expression level of Id1-Id4 genes (log2 scale). (C) PCA representation of single cells from the ventricular zone of the human embryo (Pollen et al., 2015). Color represents the expression levels of Hes genes (Hes1, Hes5) and radial glia markers (Slc1a3, Pax6, Sox2, Pdgfd, Gli3). (D) Color represents the expression level of Id1-Id4 genes (log2 scale).
The relationship between Hes oscillations and NSC proliferation is intriguing and raises the possibility that NSC fate is also based on the dynamic behavior of Hes proteins. Our results suggest that the low proneural factor activity required for NSC proliferation can be generated independently of Hes oscillations, although oscillations might provide a robust way of keeping proneural factor levels low. However, we do not exclude the possibility that Hes oscillations lead to an alternative mechanism, independent of proneural factor expression, through which NSC fate can be controlled. Here, we show that this dominance of Notch signaling in embryonic NSCs is due to synergy between Notch signaling and Id4. Id4 expression correlates strongly with Hes gene expression (and Notch activity) in both human and mouse progenitors isolated from the embryonic brain. Furthermore, both Hes gene and Id4 expression are enriched in NSCs rather than TAPs and more committed cells. These findings parallel data indicating that Id4 is a transcriptional target of Notch signaling (Li et al., 2012). Therefore, in embryonic NSCs, Id4 blocks proneural activity and target gene activation while enabling and supporting Hes auto-repression and maintaining oscillations in Hes gene expression. We propose that this is a key requirement during embryogenesis, as the dynamics in Notch and proneural activity enable a rapid transition of the NSCs to differentiation during brain development. In addition, our model proposes that Id2 and Id3, although expressed by NSCs and committed progenitors during embryonic development, are outcompeted for common partners by Id4, preventing the formation of inactive Hes-Id2/3 heterodimers (Sharma et al., 2015). Conversely, in the adult nervous system, most NSCs are mitotically inactive and, in contrast to embryonic NSCs, Notch signaling does not oscillate and proneural gene expression is absent in most NSCs, except those that activate and enter the cell cycle (Imayoshi and Kageyama, 2014a). The mechanism that inhibits the oscillatory expression of the Notch effectors Hes1 and Hes5 has been elusive. However, it has been demonstrated that high-level Notch signaling and maintained expression of Hes1 are required to induce NSC quiescence, and this correlates with ID expression (Hirata et al., 2002; Nam and Benezra, 2009; Imayoshi et al., 2013). How both processes are regulated and how Notch switches from promoting quiescence to blocking differentiation of aNSCs is unclear (Chapouton et al., 2010; Imayoshi et al., 2010; Lugert et al., 2010; Basak et al., 2012). We show computationally for the first time that the synergistic and antagonistic interactions between Hes factors and IDs are central to both of these mechanisms during adult neurogenesis. We modeled computationally different paradigms to test how the oscillatory expression of Hes could be modulated and found that reducing the auto-repression on the Hes promoter was more effective in stabilizing Hes gene expression than changing Hes protein stability or even increasing its expression levels. Our results suggest that Id2 and Id3 form ineffective heterodimers with Hes proteins and reduce the auto-repressive feedback on the Hes promoters in qNSCs. This results in increased Hes protein expression and complete repression of the proneural genes. Hence, although Id2/3 can repress proneural factor activity on their target genes, our model indicates that their major role in inducing NSC quiescence is to relieve Hes auto-repression.
In summary, computational modeling has enabled us to reconcile different experimental findings and data and provide a solid hypothesis for how Notch and IDs work together to regulate neurogenesis in the embryonic and adult brain. It is astonishing that the predictions of our model were supported by gene expression data from both mouse and human. Furthermore, our model demonstrates key differences in Notch and ID activity in embryonic and adult NSCs, and provides an explanation for the quiescent state observed in adult NSCs. Why adult NSCs are predominantly quiescent although they remain sensitive to Notch signaling is a central question in the adult neurogenesis field. Hence, the use of mathematical modeling and validation by analysis of unbiased sequence data allowed us to unravel a complex cross-regulatory network mechanism that has been difficult to address experimentally. However, as our model now uncovers the multiple potential modes of action of ID proteins in the regulation of Notch activity and NSC fate, it will be important to address these complex Hes-ID interactions and their functions using cell biological and genetic experiments in vivo. Because IDs can potentially modulate any bHLH factor, it is expected that these factors play different and even opposite roles in different systems. Therefore, understanding the role of ID interactions with other factors will require development of specific theoretical frameworks. This will allow novel hypotheses of complex biological processes to be developed that can be examined and validated with currently available data or specifically designed experimental approaches. Theoretical framework Our model is inspired by a reductionist approach proposed by Julian Lewis in 2003 to study the oscillatory dynamics of Hes/her genes (Lewis, 2003). Lewis introduced a simple model that captures many key features of Hes/Her oscillatory dynamics during somitogenesis and served as a theoretical foundation for many other mathematical models that were further developed to elucidate Hes/Her oscillatory dynamics in different tissues (Lewis, 2003; Monk, 2003; Novák and Tyson, 2008; Wang et al., 2011; Pfeuty, 2015). Therefore, we considered the dynamics of Hes mRNA (m) and Hes protein (p) to be regulated by auto-inhibition with a transcriptional delay (Eqns 1,2). We observe that the expression levels of the Hes genes can reach a maximum of 2^12 reads per million (RPM) (Fig. 5A). We therefore assumed that one cell expresses a total of 0.5 million mRNA molecules and thus a maximum of 2^11 (∼2000) Hes transcripts per cell. We used the following model of mRNA regulation:
dm/dt = m_0 − b m,
where m_0 is the mRNA transcription rate and b is the degradation rate. In equilibrium dm/dt = 0 and, therefore, m = m_0/b. The Hes mRNA half-life has been measured to be in the order of 20 min, resulting in b being ∼1/20 = 0.05 min^−1. Because the values of the Hill functions are always <1, this leads to a decrease in the effective transcript production rate. In the presence of typical levels of Notch signal and Hes auto-repression, this leads to an effective transcription rate of a few dozen transcripts per minute. In order to incorporate the effect of IDs, we expanded this model by incorporating the formation of Hes-Hes homodimers (p_2) and Hes-ID heterodimers (p_id) (Eqns 3,4):
dm/dt = m_0 H^+(I) H^−(p_2(t − τ_i)) − γ_m m, (3)
dp/dt = p_0 m − k p (p + i_d) − γ_p p, (4)
where the positive and negative Hill functions are given, respectively, by
H^+(x) = x^n/(x_0^n + x^n), (5)
H^−(x) = x_0^n/(x_0^n + x^n), (6)
and the parameters x_0 and n are the Hill factor and Hill coefficient, respectively.
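As a quick worked check of this calibration (a rearrangement of the equilibrium relation above, using the text's approximation of the degradation rate as the reciprocal of the half-life and the maximal transcript count of ∼2^11 per cell derived above):

```latex
m^{*} = \frac{m_0}{b}
\quad\Longrightarrow\quad
m_0 = b\,m^{*} \approx 0.05~\mathrm{min^{-1}} \times 2^{11}~\mathrm{transcripts}
\approx 100~\mathrm{transcripts~min^{-1}},
```

a maximal rate that the Hill factors (each below 1) then reduce to the effective few dozen transcripts per minute quoted above.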
We considered that Hes mRNA is produced at a rate m_0 and can be modulated positively by an external input I (Notch signal), as represented by a positive Hill function (H^+), and negatively by the Hes homodimer (p_2), as represented by a negative Hill function (H^−). Similarly, Hes protein is produced at a translation rate p_0. Based on experimental measurements, we considered the half-lives of Hes mRNA (degradation rate γ_m) and Hes protein (γ_p) to be 24 and 22 min, respectively (Hirata et al., 2002). For simplicity, we assumed that the degradation rates of the Hes monomer, Hes-Hes homodimer and Hes-ID heterodimer are the same (γ_p). We also assumed a high affinity between Hes monomers, and between Hes and ID monomers, represented by the variable k. The maturation of Hes mRNA has been shown to be delayed by 19 min due to intronic processing (Harima et al., 2014). For simplicity, we considered that this is the only delay involved in the process (τ_i = 19 min), although extra delays are expected due to transport of mRNA from the nucleus to the cytoplasm, protein production and dimer formation. Analytical studies have shown that the oscillatory behavior of Hes genes is highly dependent on the intronic delay and the Hill coefficient (cooperativity coefficient) (Bernard et al., 2006). The delay and Hill coefficient have contrasting effects on Hes dynamics: longer delays require lower levels of cooperativity in order to maintain oscillations (Bernard et al., 2006). Therefore, by considering only the intronic delay, a high cooperativity was required to obtain oscillations (n = 5). Similar dynamics were obtained by considering a delay τ_i = 25 min and a Hill coefficient of n = 4 (Fig. S10). The translation rate (p_0) was chosen so that protein values were in a biological range and oscillations were maintained in the order of 2-3 h. We also took into account the dynamics of a target bHLH gene a, which represents a proneural gene such as Ascl1. We considered that this gene is activated at a rate m_a0 and degraded at a rate γ_a. Experimental evidence suggests that IDs can release Hes gene-mediated auto-repression via N-boxes, but cannot release Hes gene-mediated repression of proneural genes via class-C sites (Bai et al., 2007). Therefore, we considered that this gene is repressed by both Hes-Hes homodimers (p_2) and Hes-ID heterodimers (p_id), in contrast to Hes genes, which are repressed only by Hes-Hes homodimers. We also assumed that Hes-ID heterodimers are less efficient than Hes-Hes homodimers, where ε represents the relative strength of repression of Hes-ID when compared with Hes-Hes. We assumed that ε = 0.5 (twice the concentration of Hes-ID is required to have the same repressive effect as Hes-Hes). A sensitivity analysis evaluating the effect of each parameter of the Hes circuit on the levels of both Hes and proneural factors is presented in Fig. S1. The proneural module is then described by:
dm_a/dt = m_a0 H^−(p_2 + ε p_id) − γ_a m_a, (7)
da/dt = a_0 m_a − k_a a (a + i_d) − γ_a a, (8)
da_2/dt = k_a a^2 − γ_a a_2. (9)
We assumed that the levels of proneural factors, represented by the variables m_a (expression level) and a_2 (activity level), control the fate of the NSCs, irrespective of whether Hes gene dynamics are oscillatory or sustained. Whether the oscillatory expression of Hes genes leads to NSC proliferation via a mechanism besides small accumulations of proneural factor activity remains to be determined.
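As an illustration of how this system can be integrated numerically, the sketch below implements the module with forward Euler and an explicit history buffer for the delayed dimer term. The degradation rates, the 19-min intronic delay, n = 5 and ε = 0.5 follow the text; every other constant (m0, p0, k, h0, ma0, a0, ka, the Notch input I and the free ID level i_d), as well as the mass-action form of the Hes dimer kinetics, are illustrative assumptions rather than the authors' Table 1 calibration, so the period and amplitude will not match the published figures without tuning.

```python
import numpy as np

# Forward-Euler sketch of the delayed Hes/ID/proneural module (Eqns 3-9).
# Placeholder rate constants; only half-lives, delay, n and eps follow the text.

def H_pos(x, h0, n):  # positive Hill function (Eqn 5)
    return x**n / (h0**n + x**n)

def H_neg(x, h0, n):  # negative Hill function (Eqn 6)
    return h0**n / (h0**n + x**n)

dt, T, tau_i = 0.05, 3000.0, 19.0        # minutes
g_m, g_p, g_a = 1 / 24, 1 / 22, 1 / 22   # degradation rates ~ 1/half-life (min^-1)
n, eps, f = 5, 0.5, 0.0                  # Hill coeff., Hes-ID strength, auto-repression relief
m0, p0, k, h0 = 40.0, 2.0, 1e-3, 200.0   # assumed rates and Hill factor
ma0, a0, ka = 20.0, 2.0, 1e-3            # assumed proneural rates
I, i_d = 400.0, 0.0                      # Notch input and free ID level (assumed)

steps, delay = int(T / dt), int(tau_i / dt)
m, p, p2, pid, ma, a, a2 = (np.zeros(steps) for _ in range(7))
m[0] = 10.0

for t in range(steps - 1):
    p2_del = p2[max(t - delay, 0)]                         # delayed dimer level
    H_S = H_neg(p2_del, h0, n) + f * H_pos(p2_del, h0, n)  # shifted Hill (relief f)
    m[t+1]   = m[t]   + dt * (m0 * H_pos(I, h0, n) * H_S - g_m * m[t])
    p[t+1]   = p[t]   + dt * (p0 * m[t] - k * p[t] * (p[t] + i_d) - g_p * p[t])
    p2[t+1]  = p2[t]  + dt * (k * p[t]**2 - g_p * p2[t])
    pid[t+1] = pid[t] + dt * (k * p[t] * i_d - g_p * pid[t])
    ma[t+1]  = ma[t]  + dt * (ma0 * H_neg(p2[t] + eps * pid[t], h0, n) - g_a * ma[t])
    a[t+1]   = a[t]   + dt * (a0 * ma[t] - ka * a[t] * (a[t] + i_d) - g_a * a[t])
    a2[t+1]  = a2[t]  + dt * (ka * a[t]**2 - g_a * a2[t])

tail = m[int(2000 / dt):]                                  # discard the transient
print(f"Hes mRNA after transient: min {tail.min():.1f}, max {tail.max():.1f}")
```

Sweeping I and i_d over a grid and recording the mean a_2 after the transient would be the natural way to reproduce, qualitatively, the three-state fate map discussed in the Results.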
Here, we assumed that mean high, low/intermediate and low/absent levels of proneural factors drive NSC differentiation, proliferation and quiescence, respectively. It should be noted that our results are qualitative rather than quantitative in nature. We considered the amount of IDs available to interact with Hes and proneural factors to be constant. However, ID gene expression can be highly dynamic and has been shown to oscillate in other tissues (William et al., 2007). To discuss the dynamics of IDs and their effects on proneural gene expression, we expanded our model by incorporating one extra equation describing the dynamics of the IDs (supplementary Materials and Methods, Figs S11-S13). To consider the release of Hes gene-mediated auto-repression (Fig. 3E,F), we replaced the negative Hill function in Eqn 1 with a shifted Hill function H_S(x) = H^−(x) + f H^+(x), where f represents the auto-repression factor or fold change of repression; f = 0 represents complete repression while f = 1 represents no repressive effect. Parameter values used in the simulations are shown in Table 1 unless indicated otherwise. The same Hill factor (h_0) was used in all Hill functions. Single cell transcriptomics datasets We used recently published datasets in order to validate our model predictions. Two datasets were used to evaluate embryonic NSCs. The first dataset consists of cells extracted from the ventricular zone (VZ) and subventricular zone (SVZ) of human embryos at gestation week 16-18 (GW16-18) (Pollen et al., 2015). By selecting only the cells from the VZ and with more than 1 million reads, we analyzed 179 cells. The second dataset consists of cells from the embryonic murine brain at E14.5 (Kawaguchi et al., 2008). We also evaluated the profile of single cells from the adult murine brain between 8 and 12 weeks of age (Llorens-Bobadilla et al., 2015). We selected all the cells annotated as NSCs (total of 130 cells) and as TAPs (total of 27 cells). Cells from mice with ischemia were not used. Expression levels are represented by the number of reads per million (RPM) on the log2 scale: expression level = log2(RPM + 1). Radial glia, NSC and active NSC markers were chosen based on the markers suggested by the original manuscripts that introduced the datasets (Llorens-Bobadilla et al., 2015; Pollen et al., 2015). All analyses can be reproduced by following the tutorial source code available on GitHub: http://github.com/mboareto/InterplayNotchID_neurogenesis.
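The expression processing and population gating just described can be reproduced in a few lines of pandas. In the sketch below, the file name, the matrix layout (cells × genes, raw RPM) and the ≥1 RPM positivity threshold are illustrative assumptions; the log2(RPM + 1) transform and the marker panels follow the text.

```python
import numpy as np
import pandas as pd

# Sketch of the single-cell expression processing and gating described above.

rpm = pd.read_csv("adult_vsvz_rpm.csv", index_col=0)  # placeholder path
expr = np.log2(rpm + 1)                               # expression level = log2(RPM + 1)

hes = expr[["Hes1", "Hes5"]].mean(axis=1)
id3 = expr["Id3"]

thr = np.log2(1 + 1)  # cells with >= 1 RPM count as positive (assumed cutoff)
groups = pd.Series("Hes-low/Id3-low (TAP-like)", index=expr.index)
groups[(id3 >= thr) & (hes < thr)] = "Id3+Hes-"
groups[(id3 >= thr) & (hes >= thr)] = "Id3+Hes+"
groups[(id3 < thr) & (hes >= thr)] = "Id3-Hes+"

nsc_markers = ["Slc1a3", "Nr2e1", "Sox9", "Vcam1"]
active_markers = ["Ascl1", "Fos", "Egr1", "Sox4", "Sox11"]
print(expr.groupby(groups)[nsc_markers].mean())     # NSC identity per group
print(expr.groupby(groups)[active_markers].mean())  # activation state per group
```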
How does perceived green human resource management impact employees' green innovative behavior? From the perspective of the theory of planned behavior

Employees' green innovative behavior, encouraged by enterprises, plays an important role in enterprise sustainable development. Drawing upon the theory of planned behavior, this study examines how perceived green human resource management affects employees' green innovative behavior. Through a three-stage questionnaire survey, 207 samples were obtained and hierarchical regression was employed to test the hypotheses. The results show that perceived green human resource management has a direct positive effect on employees' green innovative behavior. Green behavior intention, self-efficacy of environmental protection behavior, and identification with the company's green environmental protection system mediate the relationship between perceived green human resource management and employees' green innovative behavior, and there is a chain mediating relationship among these variables. In addition, green supply chain management moderates the relationship between identification with the green environmental protection system and employees' green innovative behavior. These conclusions transcend the macro perspective and open the black box between green human resource management and enterprise performance. Enterprises should take a holistic view of the roles of green human resource management and supply chain management in the implementation of environmental strategy.

KEYWORDS: theory of planned behavior, perceived green human resource management, green innovative behavior, system identity, green supply chain management

Introduction

As the problem of environmental pollution becomes more and more serious, the public pays increasing attention to the environmental problems of enterprises.
Faced with increasing environmental pressure, enterprises take measures to develop sustainable operation models on all fronts (Farrukh et al., 2022). Beyond the enterprise's own environmental actions, employees' green innovative behavior is a critical force that can help an enterprise improve its sustainability performance and produce less waste (Davis et al., 2020; Rongbin et al., 2022; Tu et al., 2022). Given that employees' innovative behavior is self-initiated and not prescribed by the organization, enterprises need to identify the contextual and individual antecedents that arouse employees' motivation to act environmentally (Davis et al., 2020; Rongbin et al., 2022; Tu et al., 2022). Green human resource management (GHRM) is one of the most critical measures that can motivate employees to engage in green innovative behavior (Napathorn, 2022). GHRM was developed by Wehrmeyer and Vickerstaff (1996) and has become a hot research topic in recent years (Napathorn, 2022). Many studies have revealed the impact of GHRM on employees' green behavior and performance (Chaudhary, 2020; He et al., 2021; Aboramadan et al., 2022; Tuan, 2022; Ye et al., 2022). Dumont et al. (2017) pointed out that GHRM influences employees' green behavior by constructing a green atmosphere, and that personal green values moderate the relationship between a green atmosphere and employees' green behavior. However, only a few scholars have explored the relationship between GHRM and employees' green innovative behavior. For example, taking GHRM as a mediator, scholars (Ahmad et al., 2021; Islam et al., 2021a,b) discuss the effect of supervisors' ethical leadership style on subordinates' green or pro-environmental work behavior. Today, employees' environmental protection behavior alone is not enough to meet an enterprise's sustainable development goals. It is imperative for employees to engage in green innovative behavior, which is initiated by employees, not the enterprise (Li and Wu, 2017). Green innovative behavior plays a crucial role in continuously creating environmental benefits and improving the core competitiveness of enterprises under the pressure of multiple stakeholders (Zhou and Zhang, 2018; Hazarika and Zhang, 2019). Recently, one study (Odugbesan et al., 2022) found that green hard and soft talent management practices significantly influence employees' innovative work behavior. Scholars (Bhatti et al., 2022) have pointed out that GHRM practices and environmental innovative performance are positively correlated (Chaudhary, 2020; Aboramadan et al., 2022; Tuan, 2022; Ye et al., 2022). However, these studies adopt a macro perspective at the organizational level to elucidate the impact of formulated GHRM on employees' innovative behavior, ignoring the gap between formulated GHRM and perceived GHRM (Bowen and Ostroff, 2004; Luu, 2021). There is a gap between implemented and perceived HRM (Napathorn, 2022). Employees' green innovative behavior is an individual-level concept, while formulated GHRM is an organizational-level concept, so it is not appropriate to directly examine the impact of organizationally formulated GHRM on individual green innovative behavior with the OLS method. Therefore, it is necessary to examine the role of perceived GHRM in the HRM-performance relationship, which captures the variation due to employee perceptions and interpretations (Bowen and Ostroff, 2004; Sanders and Yang, 2016).
It is also necessary to adopt an employee-centric approach to analyze how perceived GHRM practices drive employees' green innovative behavior (Paulet et al., 2021). Meanwhile, research has shown that organizational culture (Sathasivam et al., 2021), perceived environmentally specific authentic leadership (Luu, 2021), national institutional and cultural contexts (Rajabpour et al., 2022), and effective communication moderate the relationship between GHRM and environmental sustainability performance. In particular, some scholars have pointed out that green supply chain management (GSCM), as a kind of environmental management strategy, affects the relationship between GHRM and performance (Longoni et al., 2018). Employees' green innovative behavior will inevitably be affected by the company's GSCM strategy. Nevertheless, the impact of GSCM on employees' green innovative behavior has not been fully investigated. Therefore, this paper addresses three questions to fill the above research gap: first, how does perceived GHRM promote employees' innovative behavior; second, what is the mediating mechanism linking perceived GHRM to employees' green innovative behavior; third, how does green supply chain management, as a core part of an enterprise's green development strategy, moderate the relationship between perceived GHRM and employees' green innovative behavior. Answers to these questions may contribute to the literature in three ways. First, drawing on the theory of planned behavior (TPB), we provide novel insights into the mechanism that explains the impact of perceived GHRM on employees' green innovative behavior. Second, we employ an employee-centric approach to investigate the impact of GHRM on employees' green innovative behavior, which yields more robust results and helps redirect the GHRM research paradigm from the organizational level to the individual level, in line with the broader HRM research paradigm (Sanders and Yang, 2016; Paulet et al., 2021). Third, we explore the moderating effect of GSCM, which deepens the understanding of the situational factors that affect employees' green innovative behavior. This paper is organized as follows: first, an introduction providing the research background; second, a literature review and the reasoning behind our hypotheses; third, the methods of the study; fourth, the analysis and results; and finally, the discussion and conclusion. The theory of planned behavior (TPB) originated from the theory of reasoned action (TRA) proposed by Ajzen and Fishbein in 1975. TRA holds that behavioral intention (BI) is the direct determinant of behavior (Yang et al., 2012) and is influenced by behavioral attitude (BA), subjective norms (SN), and perceived behavioral control (PBC) (Trafimow et al., 2002). BA refers to an individual's assessment of how much he or she likes or dislikes performing a particular behavior and is usually the most powerful predictor of BI. Factors influencing an individual's BA can be divided into endogenous and exogenous attitudes: the former arise from the internal traits of individuals, while the latter come from external stimuli, including employee identification and attitudinal disposition in this study. SN refers to the social pressure individuals feel when considering adopting a particular behavior. Reno et al.
(1993) classify subjective norms into injunctive norms, which regulate what others think individuals should do; descriptive norms, which concern what others actually do themselves; and personal or moral norms, which concern what individuals believe they should do. PBC refers to the ease or difficulty with which an individual believes he or she can control and perform a behavior, such as an employee's self-efficacy (Hagger and Chatzisarantis, 2005). It relies on both internal control, which derives from Bandura's self-efficacy theory, and external control, which concerns the facilitation or inhibition of behavior by other factors such as the level of cooperation from colleagues, resources, or time constraints perceived by the individual (Kraft et al., 2005).

Green HRM and employees' green innovative behavior

GHRM incorporates environmental norms into human resource activities (Renwick et al., 2013; Dumont et al., 2017; Amrutha and Geetha, 2020). It is an environment-focused HRM system whose aim is to increase employees' awareness, knowledge, skills, and motivation regarding the enterprise's environmentally sustainable development (Ren et al., 2018). GHRM is a bundle of HRM practices that combines green management practices with HRM processes, including recruitment and selection, training and development, compensation and benefits, performance management, and employee engagement (Zibarras and Coan, 2015; Ren et al., 2018; Tang et al., 2018). GHRM encourages employees to carry out green behaviors at work (Kim et al., 2019). However, the GHRM designed by the enterprise will not be fully implemented and will be perceived differently by employees owing to individual differences in personality, attribution style, or values (Batt and Hermans, 2012; Sanders and Yang, 2016). Perceived GHRM refers to the GHRM practices as perceived by the employee, whether proactively or reactively; it is not the HRM formulated by the enterprise. While an enterprise may design a variety of HRM practices, for many reasons some are not perceived by employees, and such practices will not influence them. Following this logic, only perceived GHRM can influence employees (Renwick et al., 2013; Paillé et al., 2014; Ren et al., 2018; Lu et al., 2022). Perceived GHRM is a significant predictor of employee behavior (Yusliza et al., 2021). Employees' green innovative behavior refers to individuals' behaviors in their everyday work, including manufacturing new products or providing services (Rongbin et al., 2022). It involves the generation, promotion and utilization of green and novel ideas (Li et al., 2019; Li Y.-B. et al., 2020; Singh et al., 2020). Employees' green innovative behavior has two distinguishing characteristics: it is proactive and prosocial. The former highlights that it is a nonmandatory, discretionary, and self-directed initiative (Dumont et al., 2017; Robertson and Carleton, 2017; Tian and Robertson, 2019; Rubel et al., 2021; Biswas et al., 2022; Munawar et al., 2022). The influence of perceived GHRM on employees' green innovative behavior can be examined using TPB from the perspective of HR practices. Perceived green recruitment and selection practices make environmental tendencies an important factor in employee promotion, which boosts employees' intention to act in an environment-friendly way. Perceived green training practices help employees form green values and develop the ability to implement green innovative behavior.
As a form of subjective norm, such training encourages employees to carry out green innovative behavior with high consciousness and innovative awareness and helps them develop innovative competency. Perceived green performance management and compensation practices signal that if employees exhibit a high level of green innovative behavior, the enterprise will reward them with high-level pay, which enhances employees' motivation to implement green innovative behavior. Perceived empowerment and team practices enable individuals to feel a supportive atmosphere from others when engaging in green innovative behavior, which is another kind of subjective norm. Therefore, we propose the following hypothesis:

H1: Perceived GHRM is positively related to employees' green innovative behavior.

The mediating effect of intention, self-efficacy, and identity

TPB proposes that individuals' behavior is affected by behavioral intention (BI), which in turn is the combined result of variables such as personal behavioral attitude (BA) and perceived behavioral control (PBC) (Armitage and Conner, 2001). Research shows that green behavior intention is influenced by green organization identity (Chen, 2011). Green organization identity refers to the individual's interpretive scheme regarding the organization's environmental management and protection system, which in turn shapes the individual's behavior. Green organization identity is embodied in employees' identification with the green environmental protection system (IWTGPS), which reflects employees' recognition of the enterprise's green strategy, including its necessity and effectiveness. Studies have shown that green organization identity affects individuals' organizational citizenship behavior for the environment (Liu et al., 2021), sustainability exploration innovation (SER) (Xing et al., 2019), green innovation performance (Chang and Chen, 2013) and green creativity (Song and Yu, 2018). IWTGPS can encourage employees to establish environmental awareness, green management, and green behavior (Chang and Chen, 2013; Xing et al., 2019). Therefore, from the view of TPB, employees with a high sense of identity with the enterprise's green environmental protection system are highly likely to engage in green innovative behavior (Gioia and Thomas, 1996; Chen, 2011; Chang and Chen, 2013; Song and Yu, 2018; Xing et al., 2019; Liu et al., 2021). Meanwhile, as a type of BI, an employee's IWTGPS will be affected by the employee's green environmental protection intention (BA) and environmental behavior self-efficacy (PBC) in light of TPB. Green self-efficacy refers to employees' beliefs about their competence to engage in and accomplish environment-related tasks (Chen et al., 2015; Faraz et al., 2021). Green self-efficacy affects employees' green behavior (Adnan, 2021), green creativity (Chen et al., 2015), and pro-environmental behavior (Faraz et al., 2021). Employees with high self-efficacy will devote more resources, time and commitment to their work and better tolerate failure (Bandura, 1997; Zhang et al., 2022). Thus, we propose that an employee's environmental protection intention and environmental behavior self-efficacy are positively related to his or her green innovative behavior, and that this relationship is mediated by IWTGPS. That is, only when individuals have the will for green innovation will they continue to strengthen that willingness through action until green innovative behavior is finally implemented.
Furthermore, on the one hand, perceived GHRM can strengthen employees' attitudes toward green environmental protection behavior and their felt responsibility by conveying the organization's concern for its corporate environmental sustainability (ES) strategy and social responsibility, consistent with the company's overall green environmental protection strategy (Lu et al., 2022). At the same time, perceived GHRM can enhance employees' organizational identification, which in turn leads to green behaviors (Chaudhary, 2020). On the other hand, perceived GHRM can help employees develop the conscious awareness and innovation ability needed to implement environmental protection behaviors, and it paves the way for employees to recognize the organizational green environmental protection system from the perspective of self-control and the broadening of abilities (Zhou and Zhang, 2018). The generation of green environmental protection intention and the strengthening of environmental behavior self-efficacy are both affected by perceived GHRM (Cherian and Jacob, 2012; Gill, 2012; Tang and Sun, 2021). In combination with H1, we propose the following hypotheses:

H2a: Green environmental protection intention and green system identity are chain mediators between perceived GHRM and employees' green innovative behavior.

H2b: Environmental behavior self-efficacy and green system identity are chain mediators between perceived GHRM and employees' green innovative behavior.

The moderating role of green supply chain management

GSCM refers to actions that reduce the consumption of raw resources and waste in internal operational processes and increase the use of recycled/recyclable materials in external operational processes (Sarkis, 2012; Gimenez and Sierra, 2013). GSCM reflects the enterprise's environmental awareness in the processes of product development, purchasing, distribution, and reverse logistics (Chan et al., 2016). It is a kind of environmental strategy (Chan et al., 2016). Some research shows that GSCM mediates the relationship between GHRM and performance (Longoni et al., 2018). In contrast, other scholars have found that GHRM greatly influences the implementation of the GSCM process (Kumar et al., 2019). Green supply chain management is a modern management mode that comprehensively considers environmental impact and resource efficiency across the whole supply chain (Zheng and Xie, 2017). As a complex system for improving economic and environmental benefits, the green supply chain involves unified organizational planning and coordinated management, consisting of environmentally friendly purchasing of materials, energy-saving design, reverse logistics, internal environmental management, cooperation with downstream buyers, and recycling. Because supply chain management involves various departments and jobs, it has become a research hotspot. In carrying out the GSCM model, enterprises train employees, acquire ISO 14001 certification, and strengthen waste disposal. GSCM is an effective tool for improving environmental performance (Chan et al., 2016). Therefore, green supply chain management can strengthen employees' sense of identity with corporate environmental protection strategies and ultimately promote green innovative behaviors; it will strengthen the relationship between perceived GHRM and employees' green innovative behavior. Thus, we propose the following hypothesis (Figure 1).
H3: Green supply chain management positively moderates the relationship between green environmental protection identity and employees' green innovative behavior; that is, the higher the level of green supply chain management, the stronger the relationship between identity and green innovative behavior, and vice versa.

Based on the above assumptions, the analysis framework is as follows. Both environmental protection intention and environmental behavior self-efficacy are chain mediators between perceived GHRM and green innovative behavior, operating by producing the employee's sense of identity with the company's green environmental protection system. In this process, GSCM plays a moderating role: the higher the level of GSCM, the stronger the relationship between the employee's identity and green innovative behavior, and vice versa.

Samples

In this study, data were collected by questionnaire. The samples are mainly from Suzhou. Using on-site and online distribution, we distributed 260 questionnaires, including 214 paper questionnaires and 46 electronic questionnaires. After excluding 53 invalid questionnaires, 207 valid questionnaires remained, a valid response rate of 79.62%. Respondents are from the chemical, manufacturing, pharmaceutical, and hotel sectors. Their jobs are mainly in production, supply chain management, technical work, R&D and others. With their consent, a paper-and-pencil or online questionnaire was distributed. Data collection was organized in three stages. In stage 1, employees answered questions about perceived GHRM, green supply chain management, and demographics. In stage 2, about 1 month later, employees answered questions about the mediators: green environmental protection intention, environmental behavior self-efficacy, and green system identity. In stage 3, about 1 month after stage 2, employees answered questions about green innovative behavior. Sample profiles are shown in Table 1.

Measures

A 5-point Likert scale was used, ranging from 1 ("strongly disagree") to 5 ("strongly agree"). Perceived GHRM was measured with 6 items adopted from Sun et al. (2007). Green environmental protection intention (4 items) and environmental behavior self-efficacy (3 items) were adopted from Cordano and Frieze (2002). The measurement of green innovative behavior follows Ng and Lucianetti (2016) and contains 5 items. Employees' identification with the enterprise's green environmental protection system uses 3 items from Mael and Ashforth (1992). Green supply chain management was measured with 5 items adopted from Jabbour and Jabbour (2016).

Reliability and validity

We used confirmatory factor analysis (CFA) to assess reliability and validity. The results are summarized in Table 2. As Table 2 shows, the goodness of fit of the six-factor model is good: χ2/df = 1.872 < 3, RMSEA = 0.070 < 0.080. It is significantly better than the five-factor, three-factor, and single-factor models (see Table 2).

Correlation coefficients

The correlation coefficients of the variables are shown in Table 3. The data preliminarily support the hypotheses.
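To illustrate the reliability step, Cronbach's alpha for a multi-item scale can be computed directly from item scores. The sketch below uses fabricated 5-point responses shaped like the six-item perceived GHRM scale; it demonstrates the procedure only and does not reproduce the study's data or results.

```python
import numpy as np

# Generic reliability check (fabricated data, illustration only):
# Cronbach's alpha = k/(k-1) * (1 - sum(item variances)/variance(total score))

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(3.5, 0.8, size=(207, 1))                   # shared trait
responses = np.clip(np.rint(latent + rng.normal(0, 0.6, (207, 6))), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")              # typically > 0.8
```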
The correlations show that green innovative behavior is positively correlated with perceived GHRM, green environmental protection intention, environmental behavior self-efficacy, employees' identification with the enterprise's green environmental protection system, and green supply chain management.

The relationship between perceived GHRM and employees' green innovative behavior

This study uses SPSS regression analysis to test the hypotheses, and the results are shown in Table 4. Model 1 shows that the regression coefficient of perceived GHRM on employees' green innovative behavior is 0.543 (p < 0.01), so H1 is supported. Models 2 and 3 show that the coefficients of perceived GHRM on EI (green environmental protection intention) and SE (environmental behavior self-efficacy) are 0.416 (p < 0.01) and 0.532 (p < 0.01), respectively. Model 4 shows that the coefficients of EI and SE on employees' identity are 0.377 (p < 0.01) and 0.485 (p < 0.01), respectively. Model 5 shows that the coefficient of identity on green innovative behavior is 0.641 (p < 0.01). This provides a preliminary test of H2.

The mediating role between perceived GHRM and employees' green innovative behavior

This study analyzes the mediating role of employees' identification with the enterprise's green environmental protection system; the bootstrapped confidence interval for the chain mediation is [0.185, 0.375]. This indicates that the two chain mediation paths are both valid, and the mediating effect of the latter is higher than that of the former. H2 is supported.

The moderating role of green supply chain management

Model 6 shows that the interaction coefficient of green supply chain management and identity on employees' green innovative behavior is 0.129, so hypothesis H3 is supported. Further, we divided the sample into two subgroups based on green supply chain management. When one standard deviation is subtracted, β is 0.358, with a 95% confidence interval between 0.173 and 0.543; when one standard deviation is added, β is 0.411, with a 95% confidence interval between 0.234 and 0.588. Neither interval includes zero, as shown in Figure 2. The significance of the moderating effect of green supply chain management is thus further verified; the interaction effects are shown in Figure 2.

Conclusion

Based on the above analysis, the findings of this study are as follows. First, perceived GHRM has a significant positive impact on employees' green environmental protection intentions and environmental behavior self-efficacy. Green environmental protection intentions and environmental behavior self-efficacy also significantly and positively affect employees' green innovative behavior. Second, employees' identification with the company's green environmental protection system plays a significant mediating role in the transformation of green environmental protection intentions and environmental behavior self-efficacy into green innovative behaviors. The two mediating paths (i.e., perceived GHRM → green environmental protection intention → employee's identity with the enterprise's green environmental protection system → green innovative behavior, and perceived GHRM → environmental behavior self-efficacy → employee's identity with the enterprise's green environmental protection system → green innovative behavior) are confirmed. Third, green supply chain management has a positive moderating effect on the mechanism by which the
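The moderation test reported above (an interaction term followed by simple slopes at plus and minus one standard deviation of the moderator) can be reproduced generically. The sketch below uses statsmodels on fabricated data; the variable names and coefficients are illustrative assumptions, not the study's estimates.

```python
import numpy as np
import statsmodels.api as sm

# Generic moderated-regression sketch (fabricated data, illustration only):
# green innovative behavior (gib) regressed on identity, GSCM, and their
# interaction, followed by simple slopes at +/- 1 SD of GSCM.

rng = np.random.default_rng(42)
n = 207
identity = rng.normal(0, 1, n)
gscm = rng.normal(0, 1, n)
gib = 0.4 * identity + 0.2 * gscm + 0.13 * identity * gscm + rng.normal(0, 1, n)

# Mean-center predictors before forming the interaction term
identity_c = identity - identity.mean()
gscm_c = gscm - gscm.mean()
X = sm.add_constant(np.column_stack([identity_c, gscm_c, identity_c * gscm_c]))
fit = sm.OLS(gib, X).fit()
b_id, b_int = fit.params[1], fit.params[3]

sd = gscm_c.std(ddof=1)
print(f"interaction b = {b_int:.3f} (p = {fit.pvalues[3]:.3f})")
print(f"simple slope at -1 SD: {b_id + b_int * (-sd):.3f}")
print(f"simple slope at +1 SD: {b_id + b_int * (+sd):.3f}")
```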
employee's identity with the enterprise's green environmental protection system influences employees' green innovative behavior.

Theoretical and practical implications

The theoretical implications are threefold. First, previous research adopted a macro perspective to examine the impact of GHRM on green behavior (Rongbin et al., 2022; Zhang et al., 2022), and only a few studies have adopted a micro perspective to analyze this influence. Our research applies TPB to examine the impact of perceived GHRM on employees' green innovative behavior, which is more critical for enterprise sustainability than general green behavior at the individual level. Our study shows that perceived GHRM affects employees' identity with the company's green environmental protection system via employees' green environmental protection intention and environmental behavior self-efficacy. The results precisely clarify the chain mediation linkage between perceived GHRM and employees' green innovative behavior and deepen our understanding of the black box between GHRM and enterprise environmental performance. Second, scholars (Sanders and Yang, 2016) highlight that the process of HRM may be more crucial than the content of HRM. Perceived GHRM is a paradigm of process HRM; therefore, the conclusions of this study support the core views and validity of process HRM. Third, previous studies (Nejati et al., 2017; Saeed et al., 2022) have explored the relationship between GHRM and GSCM, and many of them argued that GHRM is a driver of GSCM. Our research shows, in addition, that GSCM, as a kind of enterprise environmental strategy, moderates the relationship between GHRM and employees' innovative behavior. The contingency theory of strategic HRM points out that HRM can play its role well only when it matches other management practices. Building on this, our research reveals that the interaction of GHRM and GSCM affects employees' green innovative behavior and deepens the understanding of the role of supply chain management in the contingency theory of strategic human resource management.

The practical implications are as follows. First, the HR department should consciously guide employees to learn about green behaviors through professional training and green knowledge sharing (Islam et al., 2021a; Ahmed et al., 2022), thereby enhancing their psychological sense of self-control over green innovative behaviors so that they carry out such behaviors independently. Second, enterprises ought to set up a position dedicated to building an environmental protection culture, responsible for coordinating the construction of corporate green culture, bringing the company's green environmental protection culture closer to employees, and arousing the individual resonance needed to integrate it into their work. Third, enterprises should take a holistic view of the roles of GHRM and GSCM: they need to maintain the match between GHRM and GSCM, utilize information and communication technology in GSCM (Batool et al., 2019), and train employees in green procurement, production and innovation within the GSCM process.

Limitations and future research

While the study tests the hypotheses, it also has some limitations. First, we collected GSCM data from employees, not from managers, which may lead to measurement bias.
In the future, data can be collected from multiple sources to conduct an integrated macro- and micro-level analysis and provide a comprehensive framework for discussing the interaction effects of perceived GHRM and GSCM. Second, perceived GHRM originates from the paradigm of process HRM, which stresses that HRM strength as well as attribution style are critical in predicting employees' behavior; these variables were not incorporated in this study and provide another direction for future research.

Data availability statement

The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
N-(2-Bromobenzyl)cinchoninium bromide

The title compound {systematic name: 1-(2-bromobenzyl)-5-ethenyl-2-[hydroxy(quinolin-4-yl)methyl]-1-azabicyclo[2.2.2]octan-1-ium bromide}, C26H28BrN2O+·Br−, is a chiral quaternary ammonium salt of one of the Cinchona alkaloids. The planes of the quinoline and of the bromobenzyl substituent are inclined to one another by 9.11 (9)°. A weak intramolecular C—H⋯O hydrogen bond occurs. The crystal structure features strong O—H⋯Br hydrogen bonds and weak C—H⋯Br interactions. The oxygen atom (O12) is an acceptor in weak intramolecular hydrogen bonds. The hydrogen-bond geometry is given in Table 1. Disorder of the vinyl group occurs in almost every molecular structure of Cinchona alkaloids we have determined. The vinyl group (i.e. the C10 and C11 atoms) lies on the periphery of the molecule, so it is able to move. The conformation of the vinyl moiety presented here is close to the potential-energy minimum and is frequently observed in the structures of erythro Cinchona alkaloids.

Experimental

A mixture of cinchonine (2.95 g, 0.01 mol) and 2-bromobenzyl bromide (2.5 g, 0.01 mol) in toluene (40 ml) was stirred and heated at 353 K for 4 h. After cooling to room temperature, hexane (100 ml) was added and the mixture was stirred for 10 h. The precipitated crystals were collected by suction filtration, washed with acetonitrile and dried to give N-(2-bromobenzyl)cinchoninium bromide (5.25 g, 97%, m.p. 430 K). Single crystals suitable for the X-ray diffraction study were obtained from ethanol by slow evaporation at room temperature.

Refinement

All hydrogen atoms were found on difference Fourier maps and refined using a riding model, with C—H = 0.93 Å and Uiso(H) = 1.2Ueq(C) for aromatic hydrogen atoms, C—H = 0.97 Å and Uiso(H) = 1.2Ueq(C) for methylene groups, and C—H = 0.98 Å and Uiso(H) = 1.2Ueq(C) for methine groups. The O-bound atom H12 was refined with Uiso(H) = 1.2Ueq(O).

Figure 1: The asymmetric unit of the title compound with the atom-numbering scheme. Displacement ellipsoids are drawn at the …

Special details. Geometry: all s.u.'s (except the s.u. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell s.u.'s are taken into account individually in the estimation of s.u.'s in distances, angles and torsion angles; correlations between s.u.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell s.u.'s is used for estimating s.u.'s involving l.s. planes.
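As a quick arithmetic check, the reported 97% yield is consistent with the quoted masses and standard atomic masses. The short sketch below performs this calculation; the only assumption beyond the text is that 0.01 mol of each reactant is the limiting amount.

```python
# Sanity check of the reported yield (illustrative arithmetic, not from the
# paper): the product is C26H28Br2N2O (cation C26H28BrN2O+ plus Br- anion).

atomic_mass = {"C": 12.011, "H": 1.008, "Br": 79.904, "N": 14.007, "O": 15.999}
product_formula = {"C": 26, "H": 28, "Br": 2, "N": 2, "O": 1}

mw_product = sum(atomic_mass[el] * n for el, n in product_formula.items())

moles_limiting = 0.01          # mol each of cinchonine and the benzyl bromide
mass_obtained = 5.25           # g of product collected

theoretical_mass = moles_limiting * mw_product
yield_pct = 100 * mass_obtained / theoretical_mass
print(f"MW = {mw_product:.2f} g/mol, yield = {yield_pct:.1f}%")   # ~96-97%
```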
Palliative care education: a nationwide qualitative study of emergency medicine residency program directors in the United Arab Emirates

Background: Emergency medicine (EM) physicians routinely care for patients with serious life-limiting illnesses. Educating EM residents to have general skills and competencies in palliative medicine is a global priority. The purpose of this study was to describe the current status of palliative and end-of-life education in EM residency programs in the United Arab Emirates (UAE) and to identify barriers and opportunities to inculcating palliative care (PC) instruction into EM training in a non-Western setting. Methods: Using the American College of Emergency Physicians' milestones for Hospice and Palliative Medicine for Emergency Medicine as a question guide, semi-structured interviews were conducted with program directors of all 7 EM residency programs in the UAE from January through July 2023. Qualitative content analysis was conducted to identify recurring themes. Results: All program directors agreed that PC knowledge and skills are essential components of training for EM residents but have had variable success in implementing a structured PC curriculum. Six themes emerged, namely the educational curriculum, PC policies and practices, comprehensive PC services, cultural and religious barriers to PC, EM scope of practice, and supporting residents after patient death. Conclusion: UAE national EM residency curriculum development is evolving with an emphasis on developing a structured PC curriculum. As EM residencies implement policies and programs to improve care for patients and families dealing with serious illness, future studies are needed to assess the impact of these initiatives on patient quality of life and physician well-being. Supplementary Information: The online version contains supplementary material available at 10.1186/s12245-024-00643-z.

Background

Emergency medicine (EM) was recognized as a specialty in the United Arab Emirates (UAE) in 2000. In 2007, the country's first EM residency training program was established [1]. The specialty was initially designed to focus on resuscitation, stabilization, and early management of acute disease processes. Over the past two decades, patient demographics in the country's emergency departments (ED) have followed global patterns as the UAE experiences population aging and an increased prevalence of cancer and chronic, non-communicable diseases [2]. Currently, EM physicians in the UAE routinely care for patients with end-stage disease. In many Western countries, palliative care (PC) services have developed in parallel with the aging population. PC is a specialized care approach that employs early detection and symptom relief to improve the quality of life for patients and families dealing with serious illness [3]. Initiating PC services in the ED aligns treatment with patient and family preferences and has been shown to have several benefits [4]. For patients, the early identification of PC needs decreases physical and psychological distress and improves symptom management and quality of life [5]. For family members and caregivers, early referral to PC facilitates shared decision-making and eases bereavement adjustment [6]. For institutions, inpatient PC consultations within 24 h are associated with fewer intensive care admissions, decreased in-hospital mortality, shorter inpatient length of hospitalization, and reduced healthcare expenditures [4, 7].
Studies in the United States (US) and Canada suggest that barriers to providing PC in the ED setting include time pressures, lack of access to medical records, uncertainty about the diagnosis and prognosis, and lack of prior patient-physician relationships [8, 9]. Religious and cultural values and preferences can also influence health communication and care provision, particularly in serious illness [10]. In many collectivist cultures, families believe that talking openly about death can lead to loss of hope and accelerate the dying process [11], though recent studies show that family and patient wishes are often discordant and that patients prefer to be informed of their diagnosis and treatment options [12, 13]. Formal education in PC has been identified as a key solution to overcoming the barriers to delivering PC in the ED [14]. Skills gained during residency training shape physicians' practice throughout their careers [15]. Research shows that ED physicians who are trained to prioritize resuscitative and life-saving care often do not consider PC provision within their scope of practice [16]. Therefore, educating EM residents to have general skills and competencies in palliative medicine is a global priority. In this manuscript, we report the findings from a national study of EM Residency Program Directors (PDs) in the UAE. Our objectives were to describe the current status of PC and end-of-life education in the country's EM residency programs and to identify barriers and opportunities for inculcating PC instruction into EM training.

Methods

Using the COnsolidated criteria for REporting Qualitative research (COREQ) for collecting and reporting our data [17] and the American College of Emergency Physicians (ACEP)'s milestones for Hospice and Palliative Medicine for Emergency Medicine (HPM-EM) [18] to guide our questionnaire and analysis, we conducted semi-structured interviews with all EM PDs in the UAE between January and July 2023.

Setting and participants

The UAE is a multi-ethnic and multi-cultural country in the Middle East. Over the past several decades, the country has made substantial investments in healthcare and education, with the development of academic medical centers and international accreditation of many institutions and residency programs [19, 20]. Palliative medicine is a small but growing discipline in the country, with PC facilities and hospices currently in development alongside comprehensive cancer centers in several hospitals. Recent studies of UAE medical schools and internal medicine residency programs reveal limited formal PC education but great interest among medical educators in expanding PC training [19]. To date, there is little systematic information available on PC training or care delivery in EM. EM residency training in the UAE is a structured, competency-based clinical education of four years' duration. Each residency program is led by a single program director (PD), a senior physician educator who is responsible for all aspects of the residency program, including curriculum development, policy implementation, and program administration, and who oversees the teaching, supervision, and assessment of the trainees. We conducted purposive sampling of the PDs of all Arab Board-accredited EM residency programs in the country.
Interview guide

At the time of this study, there was no consensus on PC or end-of-life competencies for EM residents in the UAE. The conceptual underpinnings of this study were based on the ACEP's HPM-EM curriculum [18]. Developed by an expert, multidisciplinary panel, this curriculum compiles a list of core palliative and end-of-life care domains in EM, covering topics such as communication skills and ethical considerations, pain management, and interprofessional collaboration [18]. Based on the ACEP's HPM-EM framework, one of the authors (TH) drafted the initial semi-structured interview guide (Appendix E1). The questions aimed to understand the depth and breadth of palliative and end-of-life care education in the residency program, EM residents' clinical exposure to patients with PC needs, and their competence in communicating with and caring for patients and families dealing with serious illness. Questions included basic demographic information about faculty, trainees, and rotations, and open-ended questions about the content of PC education, teaching methods, assessments, and any planned curricular or policy changes in PC education. We also sought to identify potential challenges the PDs faced in teaching palliative medicine to EM residents. Five emergency medicine physicians involved in resident education in the US (n = 2) and the UAE (n = 3) piloted the questionnaire for length and clarity. Minor contextual changes were made based on their input. These physicians did not participate in the final interviews. We performed data collection and analysis concurrently and iteratively adjusted the interview guide as new information arose.

Data collection

All PDs from the seven Arab Board-accredited UAE residency programs were identified through institutional websites or personal contacts. The participants were first informed about the study purpose and protocols via email, and when they agreed to participate, a virtual interview was scheduled in advance so that it could be conducted privately and with minimal disruptions. Based on the concept of information power [20] and the high-quality interviews (conducted by two interviewers with content expertise and experience in qualitative interviewing), we believe that the seven PD interviews were sufficient to answer our research questions. Interviews were conducted between January and July 2023 in English and lasted approximately 30-40 min each. They were audio recorded with participant consent, transcribed verbatim, and checked for accuracy. No additional notes were taken. Participants were not identified by name in the audio recordings or transcriptions and, other than basic demographic information, findings were not linked to individual programs. The study was approved by the Khalifa University Research Ethics Committee [H22-022]. Individual written informed consent was obtained from all participants. Participation was voluntary and no incentives were offered.
Data analysis

We performed all data management, coding, and analysis manually. Two of the authors (HI, TH) independently completed a line-by-line review of each transcript to generate initial codes. We then conducted thematic content analysis to find recurring concepts that were noteworthy or important to the questions we were trying to answer [21]. Following the process of qualitative data analysis, we engaged in iterative and cyclic constant-comparison analysis to group these concepts into themes [22]. Through in-depth conversations with the entire research team, we reached consensus on a coding scheme that was applied to all transcripts. To enhance the trustworthiness of the data, an audit trail [23] was maintained through member checks with interested participants and a checklist of data entry and analysis with all authors.

Team characteristics and reflexivity

Our diverse team consists of clinician educators involved in both undergraduate and postgraduate medical education in the UAE (HI, RB, TH) and a research associate (LOA). HI and TH are internal medicine physicians with formal training in medical education; TH has advanced training in PC. RB is an emergency medicine physician. All three physicians completed residency training in Western countries (US: HI, RB; Canada: TH) and served as PDs in UAE residency programs. To minimize bias, we were blinded to participant identities during data analysis. We were mindful of how our academic backgrounds and experiences influenced our analysis of the data and engaged in frequent group conversations to discuss and challenge each other's interpretations.

Results

All of the PDs from the seven EM residency programs in the UAE participated in this study. Table 1 lists program demographics. All PDs acknowledged that EM residents routinely care for patients and families who would benefit from PC services and agreed that training in PC is essential in EM residency programs. One PD explained: I think end-of-life care and palliative care in emergency departments are more common than people think. And I think that's the mindset that I'd be really glad to see change. [PD4] Six themes emerged from the interviews, namely the educational curriculum, PC policies and practices, comprehensive PC services, cultural and religious barriers to PC, EM scope of practice, and supporting residents after patient death. The themes are discussed below with quotes from the PDs to evidence our findings. The barriers to delivering PC in EM are summarized in Fig. 1.

Theme 1.
Educational curriculum

Despite recognition of its importance, PC education was not a formal component of the EM curriculum in any of the programs, and none offered a mandatory rotation in PC. Instead, PC topics were primarily integrated into educational sessions on oncologic emergencies and pain management. Content was often delivered by lecture and case-based learning during academic days. Simulation sessions focused on acute care management topics, with infrequent coverage of serious illness conversations or death notifications. The programs do not routinely integrate other professions, such as nurses, social workers, or faith-based leaders, into the teaching. None of the PDs reported regular assessment of PC knowledge or skills. The PDs explained: It's not a mandatory part of our syllabus or in our curriculum. I think it's more prevalent in the North American curriculum more than the ones in the region. So, in our Arab Board, it's not there; it's not mentioned either as a topic to be discussed. But we are sometimes discussing it didactically through lectures. We'll go through the related chapters in our books and then we'll speak about it. We are also doing some case-based discussions. But is it a structured one? I will say no. Do we cover PC in our academic day? Yes, we have it as one or two sessions related to the chapters in the book. [PD2]

Theme 2. PC policies and practices

The PDs reported that resident education is impacted by institutional policies and practices related to death and dying. Several PDs cited the lack of awareness of hospital policies regarding non-escalation of care as a barrier to the implementation of PC education. Others noted the logistics of initiating do-not-resuscitate (DNR) orders, which require the agreement and signatures of three consultant physicians. The PDs explained: Until recently, we didn't have DNR orders in place. It's only maybe over the last 12 to 14 months, maybe a bit longer, that we've been allowed to do it. So, quite often, when somebody comes with probably end-of-life care, we would speak to the critical care consultant and he would initiate the DNR order… and then we'll put a plan in place. And then if pain relief is required, pain relief will be given in the ER and then he will go to a ward. And then once he goes to the ward, the palliative team will pick him up, and whether he needs something for the secretions or whatever, all of that will be taken care of. Another PD described the difficulty of obtaining the required signatures: …weird sort of situation where the admitting medical team and the specialists and the juniors from the medical team completely agree with the emergency team about it. But legally we can't do it because we can't get the signatures. [PD4]

Theme 3. Comprehensive PC services

One PD explained: Emergency departments have always been the sort of safety net where patients with PC needs end up and when families are struggling. So, we often see patients where the community setting hasn't really been able to provide the care that they've required and as a result, they've ended up in the emergency room. [PD4] Another PD added: In the UK, if someone has terminal cancer, there's a lot of support. There are specialist nurses who go home to review them and there's a good primary care setup. So, those nurses and GPs will go check on these patients and make sure their pain is controlled and they die with dignity at home. We don't have those yet in the UAE. So those are a bit of a challenge. [PD 6]

Theme 4. Cultural and religious barriers to PC

More importantly, the PDs felt that cultural and religious objections of the healthcare team were barriers to providing effective palliative and end-of-life care. One PD described the reluctance of the healthcare team to initiate PC services: …ably think that the clinician doesn't feel very comfortable about, but we're not really in a position where we can do otherwise. [PD 1]

Theme 5. EM scope of practice

Only one of the PDs felt that PC was not part of the ED physician's mission and should be deferred to other specialties: I think the right person to have end-of-life conversations would be the specialist, like the oncologist, because the emergency physician might not have all the information and also, they might not be the right person to be put into the situation. The family keep asking questions outside of the emergency physician's expertise. [PD 6] Overall, the PDs acknowledged the importance of initiating goals-of-care conversations and educating their trainees to do so: There's that I want to do everything because I can rather than it is actually the right thing to do. And I think that's something certainly we've had conversations with our residents, and we discussed cases during teaching where we bring these sorts of matters up and say, you know, it is important from a patient experience for the emergency department to set the right tone. Then that becomes easy for the admitting specialties. And I think that goes a long way to that patient having a good outcome.
[PD 4] Another PD concurred: Making resuscitative decisions was very difficult. But now, in the last two years, we have advanced in this respect, discussing with the families and involving our junior residents most of the time. For example, for people who would not get benefit out of intensive care unit (ICU) treatment and the care is gonna be futile, or somebody is post-cardiac arrest, or is a sick 90-year-old person. When to make a decision not to do further resuscitation and discuss with family about the futility of care, so that's been part of our teaching program. [PD 5]

Theme 6. Supporting residents after patient death

The PDs all admitted that patient death in the ED was a source of distress for their trainees. Although most programs lacked formal processes for psychological support, many mechanisms were in place to help trainees cope with the emotions inherent in dealing with death and dying. The most common support was a "hot debrief," which often occurred within hours of the event. Debriefs focused primarily on clinical reviews of the case but also allowed participants to express emotions and receive support. Other mechanisms included a well-being curriculum with topics on self-care, peer mentoring, an open-door policy of the PD and core faculty for informal discussions and support, and reflective practice. The PDs described the support mechanisms available: Yes, but it's not a formalized process. I mean, when we have trainees that have had these situations happen, it gets escalated to somebody in our core faculty, if they weren't already involved. And then we individually reach out to them [the resident] and follow through, but not in a formalized process… But we don't have a mechanism for formally addressing that outside of a debrief. It's just, we heard X, Y and Z happened with a certain resident. Let's follow up with them rather than some sort of mechanism or trigger to capture all of those cases.
[PD 3] As part of the residency, we're pushing the notion of reflective practice as well. So, as part of our case-based discussions, the individual that may present, we will certainly try and encourage them to reflect on it as well…. So yeah, it's something we're trying to embed as a process to become used to doing because it is a challenge… We also set aside time during our teaching every week. We talk about resilience, we talk about coping strategies. We talk about, you know, how people are feeling. So yeah, we're very much in tune with that and I think we're better at that post-code as well. [PD 4] Within that shift, we'll do a debrief. We'll talk about it. We'll make sure they understand that they've done everything they need to do, and if they need to be released for the day, they can go, and then I will follow up with them as a program director. [PD 7]

Discussion

This study provides a snapshot of PC education in EM residency training programs in the UAE. The study adds to the literature by providing the perceptions and intentions of EM PDs and identifying the barriers they faced in providing PC education. Overall, EM PDs agreed that proficiency in palliative and end-of-life care was essential, but the lack of a formal curriculum, inconsistencies in the awareness and implementation of institutional policies, the lack of comprehensive and multidisciplinary PC services, and cultural and religious factors served as barriers to initiating PC services and teaching PC competencies in the ED. Our findings are consistent with studies in multiple countries worldwide reporting insufficient PC training in EM residency curricula [24, 25]. In a survey of over 100 EM PDs in the US, just over half of the programs included PC training [26]. A national study of the Canadian College of Family Physicians Emergency Medicine (CCFP(EM)) and the Royal College of Physicians and Surgeons of Canada Emergency Medicine (RCPSC-EM) postgraduate training programs showed that only 38.5% of responding programs had a structured curriculum in palliative and end-of-life care, and all education was lecture or seminar-based [27]. The PDs in our study all planned to implement additional PC training sessions. Studies show that PC education should not be limited to didactics; effective educational modalities include bedside teaching, small group sessions, role-playing, and simulation [28-30]. Research reveals that EM residents desire further training in PC and that this training improves comfort and confidence in managing end-of-life patients [14]. For example, EM clinicians in Australia who participated in PC training felt comfortable managing people with advanced cancer presenting to the ED [31]. Even brief interventions have been shown to improve PC attitudes and skills. In one study, a 4-hour educational session improved EM residents' comfort in discussing end-of-life care and knowledge of PC concepts, which was maintained at 6 months [14].
Based on our findings, we believe that a structured PC curriculum, with both didactic and clinical components, should be a mandatory part of all EM residency programs in the UAE. Research supports longitudinal teaching throughout the continuum of medical training as the most effective way to improve trainee PC knowledge and skills [32]. Given the strong interest in incorporating PC education despite the lack of structured curricula, we believe that a national teaching framework for culturally competent and locally relevant PC should be adopted by the nation's EM residency programs to standardize exposure and learning. This will require the recruitment of multidisciplinary PC specialists, including PC nurses, social workers, and faith-based professionals, who can develop and deliver the curriculum and role-model the care. Studies show that EOL teaching by PC specialists improves trainee self-efficacy in PC [33]. Faculty development is also necessary. Training programs in goals-of-care communication skills, pain management, and end-of-life symptom management should be available for EM healthcare professionals at all levels to improve their knowledge and skills in core PC principles. Our findings also have policy implications; Table 2 shows the implications for curricular and policy reform (abbreviations: EM, emergency medicine; PC, palliative care; EOL, end-of-life; HPM, hospice and palliative medicine; DNR, do not resuscitate). We are encouraged that several of the PDs routinely initiated PC communication and care within the ED. The ED sets the stage for future inpatient care and determines disposition to the intensive care unit. Early goals-of-care discussions can help tailor treatment plans that are concordant with patient and family preferences. However, initiating end-of-life conversations can be daunting for physicians. Several studies have shown the utility of conversation guides to facilitate clinician-led advance care planning, and of other electronic resources compatible with hospital electronic medical records, to support shared decision-making [34]. These resources can be implemented in UAE EDs to facilitate the early provision of PC services. Some of the ED clinicians had misconceptions about PC and cultural and religious objections to initiating PC services. Prevailing medical teacher beliefs and cultural views about PC can negatively impact resident education [35]. To fully integrate PC into EDs in the UAE, awareness campaigns on UAE policies and laws regarding DNR are needed to clarify confusion and better inform physicians. Train-the-trainer sessions focusing on the benefits of early PC referrals and the use of opiates in end-of-life pain management can further promote the implementation of PC in the country's EDs.
A notable strength of the UAE EM residency programs is the focus on resident well-being and support. Residents in programs worldwide are uncomfortable and feel ill-prepared to deal with dying patients and their families [36][37][38]. Moreover, residents can experience significant grief after a patient's death [37,39]. Most of the programs conducted "hot debriefs" shortly after a death in the ED. Studies show that real-time debriefing and supportive discussions can be effective in addressing resident emotions after a patient's death [40]. The programs also offered peer mentorship and faculty support. Support is an important resource in reducing moral distress and burnout in healthcare professionals [41]. Workshops in peer debriefing should be considered as an additional mechanism to provide trainees with the skills and tools to support their colleagues in the immediate aftermath of a patient's death [42].

Limitations

Our study has several limitations. Although a small number of PDs were interviewed, they represented all of the accredited EM residency programs in the UAE at the time of the study. We believe our findings represent the current status of EM PC education. The PDs may have presented their programs in a favorable light, thereby overestimating the depth and breadth of PC training. We only report the presence of PC training but are unable to assess the quality of the education. Finally, the resident, patient, and family perspectives are missing. Future studies are needed to assess the impact of PC training on EM residents and their competence and skills in palliative and end-of-life care.

Conclusion

A PC approach provides patients and their families an improved quality of life throughout the duration of an illness and at the end of life [3] and should be provided to hospitalized patients dealing with serious illnesses by all health professionals. There is consensus that proficiency in palliative and end-of-life care is essential but remains a global challenge for physicians. EM residents are a particularly critical group when it comes to PC training. UAE national EM residency curriculum development continues to evolve with major emphases on improving serious illness communication skills and the development of structured PC curricula. As EM programs implement policies and programs to improve communication and care for patients and families dealing with serious illness, future studies are needed to assess the impact of these initiatives on patient quality of life and physician well-being.

Table 1. Characteristics of Emergency Medicine Residency Programs in the United Arab Emirates.

Table 2. Barriers to implementing a palliative care curriculum in emergency medicine residency programs in the United Arab Emirates, as identified by UAE Emergency Medicine Residency Program Directors. PC, palliative care; UAE, United Arab Emirates.

…it a structured one? I will say no. Do we cover PC in our academic day? Yes, we have it as one or two sessions related to the chapters in the book. [PD 2]

ACGME-I, Accreditation Council for Graduate Medical Education International.
Fig. 1. Cultural and religious barriers to PC.

More importantly, the PDs felt that cultural and religious objections of the healthcare team were barriers to providing effective palliative and end-of-life care. One PD described the reluctance of the healthcare team to initiate PC services: …weird sort of situation where the admitting medical team and the specialists and the juniors from the medical team completely agree with the emergency team about it. But legally we can't do it because we can't get the signatures. [PD 4] Emergency departments have always been the sort of safety net where patients with PC needs end up and when families are struggling. So, we often see patients where the community setting hasn't really been able to provide the care that they've required and as a result, they've ended up in the emergency room. [PD 4] Another PD added: In the UK, if someone has terminal cancer, there's a lot of support. There are specialist nurses who go home to review them and there's a good primary care setup. So, those nurses and GPs will go check on these patients and make sure their pain is controlled and they die with dignity at home. We don't have those yet in the UAE. So those are a bit of a challenge. [PD 6]

Theme 4. …ably think that the clinician doesn't feel very comfortable about, but we're not really in a position where we can do otherwise. [PD 1]
Photoemission electron microscopy of localized surface plasmons in silver nanostructures at telecommunication wavelengths

We image the field enhancement at Ag nanostructures using femtosecond laser pulses with a center wavelength of 1.55 µm. Imaging is based on non-linear photoemission observed in a photoemission electron microscope (PEEM). The images are directly compared to ultraviolet PEEM and scanning electron microscopy (SEM) imaging of the same structures. Further, we have carried out atomic-scale scanning tunneling microscopy (STM) on the same type of Ag nanostructures and on the Au substrate. Measuring the photoelectron spectrum from individual Ag particles shows a larger contribution from higher-order photoemission processes above the work function threshold than would be predicted by a fully perturbative model, consistent with recent results using shorter wavelengths. Investigating a wide selection of both Ag nanoparticles and nanowires, field enhancement is observed from 30% of the Ag nanoparticles and from none of the nanowires. No laser-induced damage to the nanostructures is observed, either during the PEEM experiments or in subsequent SEM analysis. By direct comparison of SEM and PEEM images of the same nanostructures, we can conclude that the field enhancement is independent of the average nanostructure size and shape. Instead, we propose that the variations in observed field enhancement could originate from the wedge interface between the substrate and particles electrically connected to the substrate.

…three depending on the laser wavelength and the surface work function, and the photoelectron spectrum drops off exponentially with increasing energy. Extension of laser-based PEEM imaging to lower photon energies allows near-field imaging of light-nanostructure interactions in the technologically important telecommunication wavelength region. Field enhancement effects at these wavelengths can be used to boost optical communication or enhance conversion to electrical signals [17,18]. Indeed, wedges and edges as sources of plasmon concentration for propagation at these wavelengths have been studied thoroughly in recent years [15,19]. In theoretical models, strong subwavelength field enhancements are observed, indicating the relevance of imaging on a spatial scale far below the free-space wavelength of the light. Many Ag and Au nanostructures, which are the common model structures for plasmonic studies, have localized plasmonic resonances at these wavelengths, with field concentration at wedges or crescents [20][21][22][23]. Using 0.8 eV radiation for photoemission experiments presents several new opportunities and challenges. Reported work functions for Ag are in the 4.1-4.7 eV region, meaning that the photon energy is below one fifth of the work function of the material. This further implies that the intensities required for imaging are likely to be beyond the perturbative regime of direct n-photon photoemission via virtual states. Other mechanisms such as field emission [13], thermal emission [24], and defect-mediated emission [25], as well as combinations of the above [26], have been proposed to generate electron emission by optical pulses. Regardless of the exact mechanism involved, the laser-induced emission of an electron from Ag will, due to energy conservation, require at least six 0.8 eV photons, and the photoelectron yield is expected to depend non-linearly on the local near-field intensity.
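As a quick check on this photon bookkeeping, the minimum photon order follows from energy conservation alone; a minimal sketch using the work-function range quoted above:

\[
n_{\min} = \left\lceil \frac{\Phi_{\mathrm{Ag}}}{\hbar\omega} \right\rceil
         = \left\lceil \frac{4.1\text{--}4.7~\mathrm{eV}}{0.8~\mathrm{eV}} \right\rceil
         = \lceil 5.1\text{--}5.9 \rceil = 6,
\qquad 6 \times 0.8~\mathrm{eV} = 4.8~\mathrm{eV} \geq \Phi_{\mathrm{Ag}} .
\]

By the same counting, the Au film (with a work function around 5.1 eV, as noted later in the text) would require at least seven photons per electron, consistent with the emission being dominated by field-enhanced Ag sites rather than the bare substrate.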
Going to longer wavelengths, and thus lower photon energies, generally reduces the electron emission probability, and at some point the loss of photon energy to phonons compared to electron emission will result in a situation where the nanostructures start to melt or ablate before any electron emission is observed. This is not an unreasonable concern for the silver structures investigated in the current study. This type of nanostructure has been used in a large number of plasmonic studies [27,28] and is highly relevant also for imaging of attosecond phenomena using PEEM [29,30]. However, such structures are known to in some instances melt even at very moderate temperatures or in the presence of plasmonic hot spots [31]. In this work, we have performed PEEM using femtosecond laser pulses at the technologically important wavelength of 1550 nm. The samples consist of a variety of rationally synthesized Ag nanostructures deposited on a Au film, giving rise to localized surface plasmon resonances across a wide spectrum. The nonlinear photoemission process calls for a high-intensity light field. By using an optical parametric amplifier together with a high-power laser system, pulse energies of around 0.7 mJ are possible with pulse lengths of 30 fs. In the current experiments, the laser power was tuned down in order to avoid space-charge effects. One important question is whether the high intensities needed for imaging would exceed the damage threshold of the delicate nanostructures. Further, the extreme sensitivity of the non-linear photoemission process might result in such strong fluctuations in the number of emitted electrons that imaging becomes practically impossible. Our experiments show that PEEM imaging is still possible, despite these concerns.

II. EXPERIMENT

The laser system used in the experiments is based on a Ti:Sapphire regenerative amplifier delivering 20 fs pulses with up to 5 mJ energy and 800 nm center wavelength at a repetition rate of 1 kHz. The pulses are sent into a high-energy TOPAS-Prime (Travelling-wave Optical Parametric Amplifier of Superfluorescence) from Light Conversion Ltd for tunable frequency conversion over a wide range of output frequencies. The TOPAS consists of a sapphire plate used for generation of a white-light seed, followed by three amplification stages where a selected part of the white-light spectrum is amplified through optical parametric amplification in BBO crystals. The output is two linearly polarized IR pulses with wavelengths that can be tuned from 1160 to 1600 nm and from 1600 to 2600 nm respectively, a duration around 30 fs, and a total converted energy up to 1.7 mJ per pulse. A typical spectrum of the TOPAS output is shown in Fig. 1b. A 1 m focal length lens loosely focuses the beam onto the sample at an angle of 65 degrees with respect to the normal. In the experiments reported in this paper, we estimate the peak intensity incident on the sample to be on the order of 5×10^9 W/cm². Compared to previous PEEM studies using 800 nm light, this intensity is in the high part of the reported ranges [11][12][13], which is expected due to the lower photon energy. The laser beam is s-polarized, i.e. the electric field vector lies in the sample plane. The PEEM is a commercial instrument from Focus GmbH, located in an ultra-high vacuum chamber.
It accelerates the photoelectrons using a 10-15 kV voltage and forms an image of the photoelectrons using an electrostatic lens system. The PEEM is equipped with a high-pass imaging energy filter (IEF) for spectroscopic analysis. The experimental setup is schematized in Fig. 1a. For all images used in the analysis below, the laser intensity is tuned to a level where no significant space-charge effects are observed. The instrument is also equipped with a Hg discharge lamp for UV-PEEM using continuous-wave illumination at 4.9 eV.

The sample is made from colloidal Ag nanowires and nanoparticles prepared using a polyol process and dispersed in ethanol solution [32,33]. In the polyol process, poly-(vinyl pyrrolidone) (PVP) preferably attaches to the Ag(001) facets of nanocrystals and thus favors one-dimensional growth. A droplet of the solution is placed on a 50 nm thick Au film on Si and blow-dried after 30 s. The resulting sample has a mixture of Ag nanowires with diameters of around 150 nm and lengths of a few tens of microns, and Ag nanoparticles with a variety of shapes, with average sizes of 100-150 nm.

Fig. 1b caption: Recorded spectrum of the output of the TOPAS when operated at 1550 nm. The spectrum shows that the combination of optical parametric amplification and spectral filtering by the dichroic beamsplitter gives a nicely bell-shaped spectrum centered around 1550 nm.

III. RESULTS AND DISCUSSION

An SEM image of part of a typical sample is shown in Fig. 2a [34][35][36]. On the nanoparticles, though, the STM investigations indicate a clean, metallic surface. From the synthesis mechanism, it is known that the (111) surfaces of the nanoparticles have a much lower affinity for PVP than the (001) surfaces of the nanowires, which is in good agreement with our STM measurements. Thus we can conclude that some of the Ag particles with low-index facets will likely be in direct contact with the Au substrate, while the nanowires could have a PVP layer in between them and the Au surface. Typical PEEM results for this system are displayed in Fig. 3. In Fig. 3a we show an overlay image of the signal recorded with the 1550 nm laser as excitation source (in red), with a UV-PEEM image of the Au film with Ag nanowires and nanoparticles (blue). This allows us to identify each nanostructure in the image, to later correlate with scanning electron microscopy (SEM) images of the very same areas, which is seen in Fig. 3b. It can immediately be observed that no photoemission from the Ag nanowires can be detected under these conditions, as opposed to many of the Ag nanoparticles. Two further observations can also be made: First, all bright spots in the PEEM image can be correlated with a Ag nanoparticle or an assembly of particles. We also observe a number of Ag particles that do not emit electrons at a detectable level. Looking at overview images such as Fig. 3, we can conclude that roughly 30% of the nanoparticles appear in the PEEM images, meaning in practice that they enhance the field to a similar level. Due to the non-linear response, very small changes in the field can be observed. For example, in the perturbative regime the 6-photon photoemission yield would scale as the 6th power of the near-field intensity; thus a 1% increase in the field amplitude will correspond to a 13% increase in the photoemission yield.
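To make this sensitivity concrete: for an n-photon process the yield Y scales with the near-field intensity I as Y ∝ I^n, and since I ∝ |E|², a relative change in field amplitude is amplified 2n-fold in the yield. A short worked check for n = 6, using only the numbers quoted above:

\[
Y \propto I^{6} \propto |E|^{12},
\qquad
\frac{Y'}{Y} = \left(\frac{|E'|}{|E|}\right)^{12} = (1.01)^{12} \approx 1.13 ,
\]

i.e. a 1% increase in field amplitude yields roughly a 13% increase in photoemission, which is the figure stated in the text.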
Even if the photoemission process in our experiments is not fully described by perturbation theory, as will be discussed later, we can conclude that at least 6 photons (4.8 eV) will be needed for each photoelectron, as reported work functions of Ag are in the range of 4.1-4.7 eV and the work function of Au is around 5.1 eV. We can thus expect the photoemission yield to depend very sensitively on the local field enhancement. Comparing the PEEM images recorded with the UV light, the 1550 nm PEEM images, and SEM images from the same areas, we are able to further investigate which particles give a detectable electron emission when illuminated by the laser pulses. The SEM images show that there are no clear differences between particles that respond to the light and those that do not. In both cases, particles range in diameter from 80 nm to 200 nm. For some of the larger particles, an asymmetry can be seen in the PEEM images, indicating that the photoemission is not homogeneously distributed across the particle. No degradation of the particles is seen to occur in the PEEM during the experiments, which is also confirmed in the subsequent SEM analysis. The localized laser-induced electron emission from only a fraction of the particles and the absence of heating or ablation effects agree well with the local plasmonic field enhancements expected in the Ag-Au materials system. To further study the response, we performed energy-filtered PEEM imaging. In this way, we can compare the shape of the electron spectrum with the total number of emitted electrons for each nanoparticle. The spectra in general show two regions that can be approximated with straight lines in a semi-logarithmic plot: a plateau region with a small slope, followed by a cut-off region with a steeper slope. The width of this plateau region varies among the particles, but is on the order of 1-3 eV, corresponding to above-threshold ionization [37] by at least three photons. Fig. 4 shows a region of the sample imaged with the Hg lamp (a) and with 1550 nm laser pulses (b). The photoelectron spectra are plotted on a semi-logarithmic scale in Fig. 4(c) for 4 different spots from (b), and in each spectrum the two regions can be identified. We note that the two most intense spots, marked 2 and 3 in the figure, are also the ones with the widest plateau in the photoemission. The shape of the spectra corresponds well with studies of laser-assisted photoemission at moderate intensities such as the one by Aeschlimann et al. [38] and work by Schertz et al. [13]. The flat part of the spectrum indicates that the photoemission process is not fully perturbative, since this would have yielded a spectrum that was rapidly decreasing even at low energies, and that would show kinks separated by the photon energy (see e.g. [39]). Another possible source of spectrum broadening is space-charge effects [40]. However, space-charge effects would also be observed as a blurring of the images, and these experiments were performed at intensities where no such blurring of the hot spots could be observed. Furthermore, by counting single-electron events on the double MCP of the PEEM and comparing the number of counts on the CCD, we estimate that the brightest hot spots of the PEEM images in this paper consist of approximately 1000 electrons. With an acquisition time of 60 s and a repetition rate of 1 kHz, this corresponds to much less than one electron per laser pulse.
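The space-charge estimate follows from simple counting with the numbers quoted above; a minimal sketch:

\[
N_{\mathrm{pulse}} \approx \frac{1000~\text{electrons}}{60~\mathrm{s} \times 10^{3}~\text{pulses/s}} \approx 0.017~\text{electrons per pulse} \ll 1 ,
\]

so on average only about one pulse in sixty emits an electron from even the brightest hot spot, supporting the claim that Coulomb repulsion between simultaneously emitted electrons cannot account for the observed spectral broadening.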
Heating of the electron gas by the strong electromagnetic field has also been considered for electron emission. This mechanism is favoured for long pulses and small particles, since a higher density of defects gives a larger probability for electron scattering [24,25]. However, we cannot completely exclude the possibility of electrons undergoing electron-electron scattering within the duration of the laser pulse [41]. The shape of the electron spectrum is in good agreement with the observations of Schertz et al. [13] using 800 nm laser light. In that case the emission was explained via field emission and modelled via a dynamic Fowler-Nordheim equation, followed by ponderomotive acceleration in the local plasmonic field. Generally, one can envision pure perturbative multi-photon photoemission, tunneling field emission, or some combination of the two [26], which can be hard to exactly quantify. Regardless of the exact mechanism, we can expect a clearly non-linear photoemission process simply because energy conservation requires at least six photons per emitted electron. A further area was imaged in the same way as in Fig. 4, and is displayed in Fig. 5. The inset (Fig. 5c) shows an SEM image of the one particle in the area that gives a detectable photoemission. The photoelectron spectrum from the particle is shown in Fig. 5d. The two data sets represent measurements of the same single nanoparticle repeated twice. As shown also in Fig. 4, the spectrum consists of two parts that each can be approximated with a straight line. The main difference between the two measurements is the intersection point between these lines: the first measurement shows a plateau region in the spectrum that reaches approximately 0.4 eV higher than for the second measurement. This can be explained by laser drift giving rise to a lower intensity of the IR radiation, as is also indicated by the total photoelectron yield from the spot in the first measurement being ~20% higher than in the second measurement. The higher photoemission intensity corresponds to a higher surface electric field, which can also lead to more energy transferred to the emitted electrons and therefore push the plateau region of the spectrum towards higher energies. This is again consistent with electron emission beyond the perturbative regime of direct multiphoton photoemission [13,38,41]. Ag nanoparticles of these sizes exhibit dipole resonances that mostly span the visible spectrum, suggesting that the 1.55 µm radiation is off-resonance. However, we nevertheless observe a field enhancement from some of these particles but not from others, strongly indicating that some type of resonance condition is met for specific particles. This enhancement is completely uncorrelated with the average size and shape of the nanoparticles (from direct comparison of SEM and laser PEEM images of the same 47 particles), which is similar to what has been observed previously using 2-photon PEEM studies of a similar system [42]. We suggest that the observed field enhancement occurs in wedge-like features at the substrate-particle interface, especially when these are conductively connected. This is in contrast to other situations [2,13,31] where there is a thin insulating barrier between particle and metallic substrate. The existence of an enhanced field in the region of the substrate-particle interface is verified by electromagnetic simulations using two different methods. The RF module of COMSOL Multiphysics is used for finite element method calculations of the electric field around a Ag particle with dimensions corresponding to the one in Fig.
5, on top of a Au substrate. Direct contact between the particle and the substrate is ensured by cropping the bottom 1 nm of the structure, resulting in a flat, finite contact area. Just like in the experiment, the excitation has an incidence angle of 65 degrees, a wavelength of 1550 nm, and a polarization in the plane of the substrate. The situation is shown schematically in Fig. 6a. Figs. 6b-c show the resulting field enhancement in different projections of the 3D system. The electric field enhancement in the plane of the substrate is seen in Fig. 6c, which shows a field enhancement factor of approximately 5. These results are also confirmed by finite-difference time-domain calculations of a similar system. A movie showing the electric field as a function of time during and after excitation by a 30 fs pulse in a plane at the Au surface (i.e. the same projection as in Fig. 6c) is presented as Supplementary information [43]. With the non-linear detection scheme of our experiment, a field enhancement factor of 5 can dramatically increase the probability of photoemission. Small changes in the exact geometry of the substrate-particle interface can then explain the appearance of only some nanoparticles in the PEEM images. A competing explanation would be field concentration at edges and small irregularities of the nanoparticle surface. While this could give rise to differences in field strength at the surface, it does not explain why no electron emission can be detected from the nanowire ends. As we discussed above, our studies together with previous studies of polyol-synthesized Ag nanostructures indicate that we can have a combination of structures with insulating layers (especially the nanowires, which are stabilized with PVP on their surfaces), while the pure Ag surfaces found on the nanoparticles could well form a conductive path with the substrate. In similar systems with metallic particles in contact or separated by a small gap, the charge transfer plasmons occurring across the conductive junction have been shown to shift to wavelengths in the near-infrared, in a region including 1550 nm, significantly longer than the visible-regime plasmons observed when an insulating layer exists between the particle and a metallic substrate [44][45][46]. Returning at this point to the electron emission scheme proposed by Schertz et al., this also depends on acceleration of the electrons in the strong fields in a gap between the particle and the surface. In our case, without a complete gap between the particle and the substrate, we note that there will nonetheless be enhanced field lines in the wedge-like region of the particle-substrate interface, where the electrons are emitted. In contrast to the situation with an insulating gap, the wedge region at the conducting nanoparticle-substrate junction can have enhanced fields also for excitation by s-polarized radiation. The large variations in the detected signal can be explained by small differences in this wedge region, combined with the nonlinearity of the detection scheme, which makes the PEEM signal extremely sensitive to the local field. We finally also note that our method depends only on the electromagnetic fields at the surface, and thus images both dark and bright plasmon modes, which is highly relevant for evaluating the plasmonic response of nanostructures.

IV. CONCLUSION

We have used PEEM to study near-field enhancement by Ag nanostructures using 1550 nm, 30 fs laser pulses.
The photon energy is less than one fifth of the work function of the material, and we observe electron emission up to approximately 3 photon energies above the vacuum level. The method is extremely sensitive to the local surface field enhancement, especially at the substrate-particle interface. This makes PEEM able to detect small differences in the field enhancement with high spatial resolution, differences that would be hard to observe with other existing methods. At intensities just below the space-charge threshold, we note that the energy spectrum of the emitted electrons reaches up to 3 photon energies above the photoemission threshold. Our emission spectra are consistent with models derived for shorter-wavelength photoemission [13,38]. Further studies of this type can lead to new insights about photoemission from nanostructures at intensities between the perturbative and strong-field regimes. An important future measurement is then to investigate the dependence of the photoelectron spectrum on the incident intensity. It is also desirable to improve the stability of the setup in order to better resolve spectral features in the energy-dependent measurements. Both of these future measurements would benefit greatly from an increased repetition rate of the laser system, which would increase the effective dynamic range of the measurement by making imaging at lower peak intensities possible. Finally, because of the technological importance of laser light at 1550 nm for telecommunication purposes, we also believe that this method can be useful in characterizing the near-field properties of optoelectronic components such as converters between optical and electrical signals.
The Complex Triad of Combinatorial Anticancer Therapy: Curcumin, p53, and Reactive Oxygen Species

Cancer therapies based on single target molecules have proved to be ineffective both in terms of their desired action and associated undesired side effects. Combinatorial cancer therapies involve selection of different components with targeted effects, which can lead to a synergistic effect for anticancer therapy. Curcumin induces the expression of p53 and downregulates that of Mdm2, ultimately resulting in induction of apoptosis. Subsequently, there is an elevated expression of p53-induced genes, which activate reactive oxygen species (ROS), thereby establishing cellular communication and disposition of any aberrant cell by growth arrest or apoptotic cell death. As a whole, the triad of curcumin, p53, and ROS presents a unique and promising solution to the design of modern and patient-specific cancer therapeutics.

Introduction

Curcumin (diferuloylmethane; molecular formula: C21H20O6, melting point: 183 °C) is a polyphenol substance derived from plants. Curcumin comes from the rhizome of the plant Curcuma longa and is one of the major components found in turmeric (vernacular name: turmeric/haldi; Fig. 1). The common solvents of curcumin include ethanol, dimethyl sulfoxide, and other organic solvents; it is mostly insoluble in water (Fig. 1). Although it has a varied day-to-day utilization as a food spice, its medicinal significance has motivated extensive scientific and clinical research. 1 A few of the known attributes of curcumin highlight it as a novel candidate for treating diseases such as arthritis, inflammation, multiple sclerosis, Alzheimer's disease, and cancer. 1 Previously documented studies and the preclinical literature have shown that curcumin can potentially inhibit tumor formation in animal models of carcinogenesis. 1,2 However, identification of its specific and shared molecular targets still remains elusive. In addition, not much is known about what constitutes an effective patient response to curcumin-based anticancer therapy. A possible reason that has surfaced over the years involves differential gene expression and modulation of the existing genomic and proteomic constitution of the patient. With curcumin exhibiting a wide range of pharmacological activities, the mainstream focus of cancer therapeutics has witnessed a translation from the search for single-agent treatments to that of potent multimodal anticancer therapy.

Apoptosis, or programmed cell death, is vital in limiting cell growth and regulating the cell cycle in order to maintain homeostasis. There are two major pathways (ie, extrinsic and intrinsic) through which programmed cell death culminates in the activation of the aspartate-specific cysteine proteases (caspases). 3 The extrinsic pathway involves engagement of death receptors that belong to the tumor necrosis factor (TNF) family in the formation of the death-inducing signaling complex. 4 A conditional rise in the expression of these death receptors (ie, TNF receptors and FAS receptors) on specific cells, and further stable conjugation with their complementary ligands (TNF-alpha ligand, FAS ligand), leads to activation of caspases. The caspase-based induction of cellular death deploys both initiator/activator caspases (caspase-8 and -9) and executioner caspases (caspase-3, -6, and -7). 4
The intrinsic pathway is triggered in response to DNA damage (strand breaks, base dimerization, etc) 5 and is associated with mitochondrial depolarization and release of cytochrome c from the mitochondrial intermembrane space into the cytoplasm. Cytochrome c, apoptotic protease-activating factor-1, and procaspase-9 are activated, which in turn promote the activation of caspase-3. 6 The process of DNA damage at the intrinsic level is often understood as an outcome of the negative effect of reactive oxygen species (ROS).

ROS and free radicals, such as the hydroxyl radical and hydrogen peroxide, are produced in the body as by-products of several metabolic pathways 7 and upon exposure to exogenous stress, such as ionizing radiation, air pollution, or external stressors. 6 Several life processes involve the integration of multistepped redox pathways where ROS are recruited as secondary messengers. 7 However, within the body, the systemic response to the level of these radical species is headed by a well-equipped antioxidant defense mechanism (involving enzymes) that counteracts the ROS. 8 Therefore, any failure in the activation or enhancement of these antioxidant defenses can lead to elevated oxidative stress and in turn greater cellular damage and death. We can therefore correlate the role of ROS with the transformation of a normal cell into a tumor cell and its metastatic progression into cancerous cells.

One of the most ubiquitous physiological aberrations associated with cancer pathology is dysregulation of the cell cycle and uncontrolled cell proliferation. Tumor suppressor proteins occupy a pivotal position in maintaining genomic integrity. Therefore, our search for molecular regulators highlighted p53 (a tumor suppressor gene) 8 and its corresponding negative regulator, ie, Mdm2 (Mouse double minute 2 homolog, an inhibitor protein), as two such unique candidates. p53, a proapoptotic gene, 9 is known to commit a cell to programmed cell death (apoptosis) via a cascade of signaling pathways. On the other hand, Mdm2 exhibits a putative regulation of cell growth and death by altering the transformation of normal cells into tumor cells. Thus, for therapeutic innovations targeting cancer cell growth, it will be wise to explore this keynote interplay of apoptosis and the cell cycle in the presence of curcumin.

Although the effect of Mdm2 on regulation of the cell cycle has been elaborately characterized in the form of its p53 targets, p53-independent targets have been gaining momentum and have come into focus very recently. 8 This review will summarize our current understanding of Mdm2- and p53-based regulation 9 of cell death via differential signals involving ROS-mediated pathways. 9 In addition, most of the available experimental data and literature have ignored the interaction of curcumin with each of these molecules (p53 and Mdm2), and critically, how the interplay of these three can be an effective anticancer therapy needs more discussion and understanding.

Multimodal action of p53: sensation, altercation, and autoregulation

p53, a tumor suppressor protein, plays a key role in the regulation of several cellular processes, including the cell cycle, apoptosis, DNA repair, angiogenesis, and the antioxidant defense mechanism. 10 The apoptotic function of p53 is critical for tumor suppression and reconstitution of inactive p53. 10
It actively orchestrates the physiological response to cellular stress, such as hypoxia, DNA damage, and oncogene activation, and has the ability to eliminate excess, damaged, or infected cells by inducing apoptosis. Therefore, p53 is critical for proper regulation of cell proliferation in multicellular organisms.

Although its cellular level is low under homeostatic conditions, there is a significant rise in the level of p53 after sensing any physiological stress condition. The p53 protein induced in stressed cells shuttles into the nucleus, where its action as a transcription factor induces the expression of several downstream genes such as Bax (Bcl-2 associated X protein), GADD45 (the growth arrest and DNA damage 45 protein), and p21, 11 which largely come under the group of apoptosis-inducing and tumor growth-inhibiting genes. 12 It is reasonable to hypothesize that such a regulator of cellular growth must have an autoregulatory feedback loop to regulate its own cellular levels. And given that p53 is expressed at a low level in normal cellular conditions, it follows that its negative regulator protein must be upregulated under similar conditions. Therefore, in normal conditions, expression of p53 induces the expression of the Mdm2 oncogene, which acts as a negative regulator of cellular levels of p53.

Mdm2 regulation of p53: from transcriptional inactivation to proteasomal degradation

MDM2, an oncoprotein, is coded by the Mdm2 gene (Hdm2 in human beings) and acts as an E3 ubiquitin ligase. It is known to target p53 and thus commit it to proteasomal degradation (making it a short-lived protein) under normal conditions. This action of Mdm2 requires its shuttling out of the nucleus via activation of a nuclear export signal (NES), 12 thereby causing a rise in the cytosolic levels of Mdm2. 13 During this cytosolic localization, there is a rapid decline in the level of p53. One such interesting experiment performed by Freedman et al exhibited the role of NES activation in Mdm2 and subsequent degradation of p53 in vivo, even leading to low and steady levels of p53. 14 Apart from regulating p53 protein levels, Mdm2 also exerts an active inhibition of transcriptional levels of p53 by binding to its transactivation domain. Several papers have confirmed this inhibitory role of Mdm2 both in in vitro and in vivo assays. Moreover, Mdm2 interacts with several tumor suppressor proteins, including retinoblastoma, p21, p19/14ARF, E2F1 (E2F transcription factor 1), p73, and Mtb (Mycobacterium tuberculosis). 13,14 These proteins constitute p53-independent targets of Mdm2, which can also prove to be…

Anticancer effects of curcumin: evidence and preliminary insights

Cancer drug development over the past decade has consolidated our focus on the modulation of specific targets, mostly one at a time (genes or proteins leading to dysregulation of cell growth and proliferation pathways). Emergence of the new generation of combinatorial and patient-specific cancer drug design has led to the development of effective and targeted cancer therapeutics, where multiple carcinogenic modalities are under focus.

Indeed, cutting-edge molecular biology-based research has strengthened the claim for curcumin's role in the disruption or restriction of specific cancer-causing molecular mechanisms (transformation, proliferation, and metastasis). Furthermore, even in the case of drug-resistant tumor cell lines, the response has been positive in suppressing the growth of the tumor. 14
Curcumin exerts highly cell-specific and context-dependent regulation of cellular reproduction. In already existing tumors, curcumin tightly regulates the molecular signaling involved in the cell cycle. This helps to preserve the population of healthy cells and stops the uncontrolled proliferation of new tissues.

One of the major advantages of curcumin-based anticancer therapy is its minimal side effects. This can be summarized as negligible off-target effects of curcumin along with noninvasiveness toward neighboring healthy tissues. 15 It is also known to curb tumor growth by making tumors more susceptible to pharmacologic cell-killing treatments. 16 In addition, curcumin modulates tumor suppressor pathways by triggering mitochondria-mediated cell death in tumor tissues and thereby increasing the death of cancer cells. 17 In its multimodal approach, curcumin results in the starvation of tumors of their vital blood supply by blocking angiogenesis and vasculogenesis. 18 It has proved to be effective in opposing many of the processes that permit the spread of metastatic cancer cells. These multitargeted actions are central to the capacity of curcumin to block multiple forms of cancer by targeting different stages of tumor growth and cellular plasticity. 16,17

Curcumin inhibits the interaction of Mdm2 and p53

Curcumin binds to Mdm2 and leads to the loss of its hold on p53. As a consequence, p53, which was otherwise held inactive by Mdm2 (a ubiquitin ligase protein), gets reactivated and translocates into the nucleus (Fig. 2). 18 As a result, specific pathways such as cell cycle repair, the antioxidant defense mechanism, and other apoptotic pathways are called into action (Fig. 2) and cellular homeostasis is restored. Several experiments by Lin et al over the past decade have proved the correlation of curcumin with cancer inhibition in different clinical models of cancer. One important contribution of the group has been their work proposing curcumin as a dietary supplement for combating cancer, which has been of immense promise to the scientific community. However, most of these effects have been characterized in clinical models of cancer and in in vitro experiments where the exact roles of molecular modulators have been largely ignored. 19 Taking this cue, it can be hypothesized that curcumin can have much better and finer chemopreventive and chemotherapeutic effects than those earlier reported in several experiments conducted by Li et al (Fig. 2).

Literature review suggests that multiple experimental approaches and therapeutic paradigms have established curcumin as a chemotherapeutic agent. One such study involves the design of a nanoparticle-based drug delivery system for curcumin at the site of specific cancer cells. In addition, several efforts are ongoing to improve the efficacy of curcumin by developing derivatives of curcumin, eg, Pculin02H. 20 However, more recently, attention has shifted to understanding the molecular regulation of this process. Curcumin negatively regulates the MDM2 protein, resulting in destabilization of the interaction of Mdm2 with p53 (Fig. 3). 21 The released p53 then tries to repair DNA damage via activation of p21 and 14-3-3 and subsequent arrest of the cell cycle at the G1 and G2 22 checkpoints, respectively (Fig. 3). 23 As a result, there is cellular induction of apoptosis. Interestingly, the exact mechanism driving this p53-guided cell cycle arrest or apoptosis is only partially understood and is a subject of further study.
Various factors that are known to influence this decision of cell cycle arrest or apoptosis include the level of p53 expression (Fig. 3), the nature and type of stress signal, the cell type, and the cellular state when under stress. 23 The curcumin-induced extrinsic apoptotic pathway is mediated by death receptors like Fas and TNF and involves cleavage of the BH3-interacting domain death agonist (BID) 24 by activated caspase-8 (initiator caspase; Fig. 3). This results in the release of cytochrome c 25 from the mitochondria and subsequent activation of caspase-3 (executioner caspase), resulting in apoptosis. 25 However, the role of curcumin in mitochondrial regulation of antiapoptotic proteins such as B-cell lymphoma-2 (Bcl-2) or B-cell lymphoma-extra large (Bcl-xl) is still unresolved and needs more investigation (Fig. 3).

Curcumin and inflammation

Curcumin blunts cancer-causing inflammation and thereby reduces the level of inflammatory cytokines throughout the body. This is achieved by blocking the inflammatory master molecule nuclear factor-kappaB (NF-κB). 26 By inhibiting this crucial inflammation protein, curcumin ultimately reduces the site-specific onset of allergic reactions. 25 Most anti-inflammatory drugs try to prevent the onset of inflammatory reactions by inhibiting specific inflammatory enzymes. Curcumin is known to work effectively against cyclooxygenase-2 (COX-2) and lipoxygenase (LOX), which are molecular modulators of cellular inflammation and lead to DNA damage. 26 Regular intake of curcumin is known to significantly reduce the levels of inflammation-triggering enzymes. In human beings, a dosage of 200 mg/kg of body weight of curcumin can prevent hyperallergic inflammation. 27 Curcumin downregulates the production of harmful advanced glycation end products that could otherwise trigger inflammation leading to cancerous mutations. 27

The complex triad: curcumin, ROS, and p53

In response to cellular stress resulting from accumulated ROS, extensive DNA damage is a common outcome. Although the wild-type p53 orchestrates transcription of numerous genes and decides the fate of these stressed cells, the broad outcome for stressed cells includes arrest of the cell cycle, senescence, and apoptosis. The role of curcumin in regulating the biological redox system seems complex and paradoxical. Substantial literature based on in vivo and in vitro experiments has established curcumin both as an antioxidant and a prooxidant. 28 Ironically, curcumin scavenges ROS and counterbalances the endogenous redox system, whereas it is also known to incite cellular ROS production (Fig. 4). 29 However, in noncancerous cells exposed to curcumin, an increase in ROS (redox activation) 28 is accompanied by a concomitant upregulation of p53, ultimately resulting in increased apoptosis. A similar association of ROS and curcumin-induced apoptosis in malignant cells was reported by Yoshino et al. 30 Therefore, in our search for an effective anticancer therapy, we have developed a broad understanding of curcumin, p53, and ROS interactions and have named these three interlinked components "the complex triad (TCT)" (Fig.
4). The word complex is meant to emphasize the indirect, mediator-involved interaction of these molecules, where it is still hazy to decipher whether each of these molecules exerts a feedback regulation on the others.

(Fig. 3 caption: Curcumin downregulates MDM2 and thereby upregulates the transcriptional influence of p53 on several target genes. One line of action involves the transcriptional activation of p21 and 14-3-3 expression, which leads to cell cycle arrest. In addition, it also upregulates proapoptotic genes such as Fas, Bax, and p53-induced genes (PIGs), which lead to activation of caspases and ultimately end with cellular apoptosis. Abbreviations: 14-3-3-α/β/σ, proteins that regulate the cell cycle; casp, caspase (cysteine-aspartic protease); PIG, p53-induced genes; ROS, reactive oxygen species; cyt-C, cytochrome c; apaf, apoptotic protease-activating factor-1.)

Oxidative stress has been correlated with different cell-state-directed outcomes, such as cell cycle arrest, DNA repair, and apoptosis (Fig. 4). The excessive generation of ROS in mitochondria resulting from treatment with chemotherapeutic agents leads to apoptosis, while oxidative stress in the nucleus directs the cell to the p53-dependent DNA repair pathway. 31 During several biological processes involving redox reactions, PIGs (Fig. 3) activate ROS, 31 which as secondary messenger molecules move into the mitochondria and interact with the resident signaling molecules (Fig. 4). Such an interaction results in an imbalanced mitochondrial membrane potential, and thus cytochrome c is released from the mitochondria. 32 Discrete response patterns suggest that multiple biological pathways exist, which lead to an overall integration of p53 signaling (Fig. 5).

Annotation of multiple molecular targets of curcumin

Not surprisingly, the tremendous ongoing research on cancer over the past few years has established that carcinogenesis is a multistep process involving abnormal functioning or dysregulation of varied molecular regulators. These include growth factors, growth factor receptors, transcription factors, cytokines, and apoptosis and proliferation genes. 33 Precisely, it is an overall loss of cellular homeostasis influenced by both the genetic and environmental makeup of a person. As a whole, this leads to the cellular onset of cancer growth and clinical manifestation in the form of metastasis. Some specific molecules are described here for a better understanding of potential targets of curcumin and how they can be incorporated into our search for developing a curcumin-based anticancer therapy. 34 This paper aims to reinforce our present understanding of the role of curcumin as a potent anticancer drug via these target molecules. 19 Of the several molecular targets of curcumin, Cyclin D, Akt, and NF-κB are major regulators of cell growth and differentiation, thereby determining the process of the cell cycle and survivability.

The epidermal growth factor receptor and fibroblast growth factor-mediated signaling of the phosphatidylinositol-3-kinase (PI3K)-Akt pathway is a major cell survival pathway. The dysregulation of this pathway has been implicated in several cases of tumorigenesis and is significantly disturbed in metastatic cancers. Reports have linked the concentration- and exposure-time-dependent interaction of curcumin with the PI3K/Akt pathway, where the net outcome is the inhibition of the Akt pathway.
At the molecular level, this inhibition is known to be achieved by limiting the phosphorylation of the mammalian target of rapamycin (mTOR), Akt, and downstream substrates involved in the Akt pathway. An elaborate study by Yu et al highlighted a possible restoration of this negative effect of curcumin by upregulating Akt or by siRNA-mediated gene silencing of tuberous sclerosis protein 1 (TSC1) and tuberous sclerosis protein 2 (TSC2; Fig. 6). 35 TSC1 and TSC2 are the major components of the mTOR signaling pathway and are putative sites for modulation of cancer cells. 35 COX-2 and LOX are the major molecular regulators of cellular inflammatory reactions, and curcumin inhibits inflammation by downregulating the expression of these molecules (Fig. 6). These inflammatory reactions could otherwise have resulted in cellular damage and intrinsic damage of DNA leading to cancer progression. 19 The TNF and Bcl (B-cell lymphoma) groups of proteins constitute proapoptotic proteins, which activate caspases (activator and executioner caspases; Fig. 6) leading to the onset of apoptosis. 35 Herceptin 2 and androgen receptors are commonly correlated with the onset of breast cancer and therefore are active targets of curcumin. This makes curcumin an effective therapeutic agent for combating breast cancer (Fig. 6). Pculin02H, a curcumin derivative, has been more effective in inhibiting proliferation than curcumin itself, and thus highlights the development of curcumin derivatives that can potentially target the growth of cancer cells. 20

Conclusion

Curcumin and p53 have long been known as anticancerous agents. MDM2 (the primary cellular inhibitor of p53 activity) was found to hinder p53 through direct protein-protein interaction, ie, Mdm2-p53, and through regulation during p53 transcription. We suggest that for this particular protein-protein interaction, it may be feasible to design a potent, nonpeptide, drug-like small-molecule inhibitor using curcumin as one of the key components to block the p53-MDM2 interaction. It is generally considered that ROS may promote either cell proliferation or cell death depending on the intensity and location of the oxidative collapse and the activity of the antioxidant system. The excess generation of ROS in mitochondria resulting from treatment with chemotherapeutic agents gives rise to apoptosis, while oxidative stress in the nucleus directs cells to p53-dependent DNA repair.

However, limitations in understanding the molecular basis of this regulation have hampered its transition from theory into practice. Other than in traditional medicine in a few countries, modern medicine has largely ignored this anticancerous drug in its mainstream cancer treatment strategies. In addition, the molecular interactions of curcumin, p53, and MDM2 have opened up avenues for drug design and development. Targeting this co-interaction model with significant inhibition of cancer will require further characterization of these therapies, which can be developed into a vaccine or an adoptive cell transfer method to fight against cancer. Therefore, understanding the molecular basis of curcumin's action on cancerous cells will lead to newer studies as to how these can be incorporated into conventional treatment and pharmacological formulations.
Figure 2. Inhibitory effect of curcumin on Mouse double minute 2 homolog (MDM2) and p53 interaction. Curcumin binds to MDM2 and results in the dissociation of p53 and downstream transcriptional regulation of target gene expression. The dissociated p53 also interacts with other proteins and heralds programmed cell death (apoptosis).

Figure 4. The complex triad of curcumin, p53, and ROS: exhibiting the complex interlinking relationship between the three key regulator molecules in the modulation of cancer cells and their microenvironment.

Figure 6. Common molecular targets regulated by curcumin directly or indirectly.
Preparation of silicon surface pyramid arrays and modification of thin gold film for surface-enhanced Raman scattering

A monocrystalline silicon surface microstructure is prepared in 15wt% K3PO4 and 3wt% K2SiO3 solutions at 90 °C for 42 min. The resulting monocrystalline silicon surface microstructure exhibits high coverage, good uniformity, and low reflectivity. A surface-enhanced Raman scattering (SERS) substrate based on silicon nanoporous pyramid arrays/gold film (Si/Au) is prepared by ion sputtering. Methylene blue (1 × 10−5 mol/L) is used as a probe molecule, and the enhancement effect of the Si/Au SERS substrate is investigated. The results show that the silicon surface pyramid arrays have a high Raman enhancement effect when the deposition time is 5 min. Furthermore, when the Si/Au SERS substrate is applied to the Raman detection of melamine (1 × 10−3 mol/L), the Raman characteristic peaks of melamine are obviously enhanced. These results indicate that the Si/Au SERS substrate has potential application value in food safety and the detection of chemical dyes and contraband.

Introduction

Surface-enhanced Raman scattering (SERS) technology has been widely used in physics, materials science, biology, and environmental science [1,2]. In 1974, Fleischmann et al. studied the Raman scattering experiment for the first time using pyridine as the Raman-active material on a silver electrode [3]. It was found that an enhanced Raman scattering signal of pyridine was obtained on the rough silver electrode surface. With its rough surface, the silver electrode could adsorb more molecules, and the enhancement effect of Raman scattering then emerged. In 1977, through theoretical and experimental observations, Van Duyne et al. found that the Raman scattering generated by pyridine molecules adsorbed on the surface of the silver electrode was enhanced by 10^5 to 10^6 times compared to the normal Raman spectrum [4]. Pyridine molecules can be adsorbed on rough metal (gold, silver, copper) surfaces, and a rough metal surface facilitates the excitation of surface plasmons [5]. The SERS phenomenon is also observed on other rough metal surfaces, such as vacuum-evaporated metal island films, chemically prepared metal sols, coated metal films, etc. Guina Xiao and his team prepared SiO2-Au core-cap nanostructure arrays on glass substrates by dip coating and wet chemical reduction [6]. Bo-Kai Chao prepared a high-performance SERS substrate in which plasmonic gold nanodroplets are fabricated by wet etching and island photolithography [7]. K. Leinart deposited an aluminum thin film (100-500 nm) on a (100)-oriented Si wafer by magnetron sputtering [8]. P. P. Zhang prepared a large-area and highly uniform Si nanotaper array modified with Ag or Au/Ag nanoparticles as a Raman substrate [9]. Another important application of the metal SERS effect is the probe tip of the metallized film.

In this paper, a method for the preparation of Raman substrates is reported. Using 15wt% K3PO4 and 3wt% K2SiO3 solutions at 90 °C for 42 min, a monocrystalline silicon surface microstructure with small size, high coverage, and good uniformity is prepared by anisotropic etching of the monocrystalline silicon surface [10,11]. The Si/Au SERS substrate is prepared by ion sputtering. In order to optimize the substrate enhancement effect, the influence of the sputtered gold film thickness is discussed. MB is selected as the probe molecule, and the SERS activity toward melamine is also measured.
The normal Raman spectra of MB and melamine are compared with their SERS spectra when adsorbed on the Si/Au SERS substrate.

Materials

Hydrofluoric acid (HF), ethyl alcohol absolute (C2H6O), potassium phosphate tribasic (K3PO4), and potassium silicate (K2SiO3) are used without any further purification. Methylene blue (MB) is purchased from Shanghai Xin Chemical Co. Ltd. Ultrapure water is used for all solution preparation and experiments (resistivity > 18.2 MΩ·cm). MB solution is diluted to concentrations of 1×10−4 mol/L and 1×10−5 mol/L with ultrapure water. Melamine solution is diluted to 0.01 mol/L with ultrapure water. Monocrystalline P-type silicon wafers, <100>-oriented, 15 mm × 15 mm in size, with resistivity 1~3 Ω·cm, are used for the etching experiments. The UV-Vis spectra of silicon wafer surface reflectivity are recorded using a UV-Vis spectrophotometer (SHIMADZU UV-2600) equipped with an integrating sphere. The size distribution, uniformity, and morphology of the monocrystalline silicon wafers are examined with an FEI Company (USA) Quanta 250 FEG scanning electron microscope (SEM). Raman spectra are recorded between 300 and 1800 cm−1 with a 785 nm excitation Raman spectrometer (portable Raman series, QE65Pro). Gold films are deposited with a Cressington 108 Auto/SE sputter coater (USA).

Preparation of monocrystalline silicon surface microstructure: After proper cleaning in our group's own way [12], we used monocrystalline p-type silicon wafers, <100>-oriented and 15 mm × 15 mm in size, to prepare the pyramid structure on the silicon surface by a chemical wet etching method. We maintained the temperature at 90 °C throughout the whole process. The two steps are as follows: (1) As-cleaned silicon samples are dipped in a capped vessel containing 15wt% K3PO4 and 3wt% K2SiO3 solution for 36 min to etch [13]. (2) After completing the etching process, the monocrystalline silicon wafers are washed in ethanol and ultrapure water, with ultrasonic cleaning for 5 min in each. Finally, the etched samples are rinsed in ultrapure water and dried in the oven.

Preparation of Si/Au SERS substrate: Gold film is deposited on the silicon surface pyramid arrays by the physical method of ion sputtering, with sputtering times of 3 min, 4 min, 5 min, 5.5 min, and 6 min, respectively. As a result, pyramid arrays uniformly decorated with Au nanoparticles are achieved. In order to keep the gold film thickness comparable, each silicon wafer is kept at the same position during the sputtering process. Fig. 1 shows the fabrication of the Si/Au SERS substrate.

Results and discussion

The SEM images of the monocrystalline silicon wafer surface microstructure are shown in Fig. 2. They exhibit pyramids with small size, high coverage, and good uniformity. Fig. 3 shows the distribution of the pyramid size under the etching conditions of 15wt% K3PO4 and 3wt% K2SiO3 solutions at 90 °C for 36 min. Most pyramid sizes are between 2 μm and 6 μm, with an average pyramid size of 4.8 μm. Furthermore, the density of the pyramids is 92.8%, and the pyramid size distribution ranges from 0.97 to 11.39 μm. Fig. 4(a) is the reflectance spectrum of the bare silicon surface, (c) is that of the etched silicon wafer surface, and (b) is that of the silicon wafer surface coated with the gold film. It is worth noting that the spectra of (a) and (c) are very similar in shape, with the surface reflectivity of (c) reduced. In Fig. 4(b), the reflectance between 380 and 550 nm is reduced.
SEM images of the Si/Au SERS substrate at different magnifications are shown in Fig. 5. As expected, the Si/Au SERS substrate retains the pyramid structure. From the enlarged SEM image, it can be seen that the gold film roughens the pyramid surface, providing more hot spots on the Si/Au SERS substrate. This roughened pyramid surface greatly enhances the Raman signal. To obtain the best Si/Au SERS substrate, different coating times on the etched silicon surface were studied; the thickness of the gold film can be controlled by the sputtering time. MB, a phenothiazine dye, was used as the probe molecule to demonstrate the properties of the substrate. MB aqueous solution at a concentration of 1×10^-5 mol/L was dropped onto the surface of the Si/Au SERS substrate and dried naturally in air. Fig. 6 compares the SERS spectra of MB for the different sputtering times. Curves a, b, e and d show only part of the characteristic peaks of MB, whereas curve c clearly shows the characteristic peaks of MB, and the 520 cm^-1 peak of the silicon wafer is quenched. The optimal sputtering time was therefore 5 min.
The normal Raman spectrum of MB (integration time 20 s) and the SERS spectrum of MB molecules adsorbed on the Si/Au SERS substrate are shown in Fig. 7, curves (a) and (b). Curve (a) shows only part of the characteristic Raman bands of MB, whereas all of the characteristic vibrational modes of MB are clearly visible in curve (b). The background fluorescence and the 520 cm^-1 band of the silicon wafer itself are eliminated in curve (b). The Raman peak shifts, relative intensities and peak assignments of methylene blue are listed in Table 1 and compared with results reported in the literature [14]. The SERS spectrum of MB on these substrates is consistent with the references, although some vibrational peak positions are shifted. No peak assignment could be found for the bands at 1326, 1227, 1151 and 949 cm^-1; these may arise from interactions between the probe molecules and the Si/Au SERS substrate, or the scattered light may be affected by the intensity [13]. The characteristic 447 cm^-1 peak of MB adsorbed on the substrate matches that of the corresponding MB aqueous solution. However, the bands at 773, 1187, 1304, 1407 and 1630 cm^-1 in the normal Raman spectrum (Fig. 7, curve a) are shifted to 771, 1182, 1326, 1397 and 1621 cm^-1 in the SERS spectrum, respectively. In addition, the bands at 499, 612, 667, 888, 949, 1043, 1121, 1151, 1227, 1418 and 1502 cm^-1 do not appear in the normal Raman spectrum of MB. The peak intensities of MB adsorbed on the Si/Au SERS substrate are stronger than those in the normal Raman spectrum, especially the peaks at 447, 771, 1397 and 1621 cm^-1. Therefore, this substrate enhances the Raman signal and eliminates fluorescence interference effectively.
[Table 1: Raman shifts, relative intensities and peak assignments of methylene blue on the Si/Au SERS substrate compared with literature values [14-19]; the extracted rows are only partially recoverable. Abbreviations: s, strong; m, medium; w, weak; υ, stretching; α, in-plane ring deformation; β, in-plane bending; γ, out-of-plane bending; δ, skeletal deformation.]
Under the same conditions, we further investigated the performance of the Si/Au SERS substrate by detecting aqueous melamine. Fig. 8 shows the SERS signal of the bare Si/Au SERS substrate (curve a), melamine aqueous solution at 1×10^-3 mol/L adsorbed on the Si/Au SERS substrate (curve b), and pure melamine powder (curve c).
In Fig. 8, curve (a), the Raman peak of the silicon wafer at 520 cm^-1 is easily observed between 400 and 1100 cm^-1; this 520 cm^-1 band is assigned to the characteristic peak of silicon. Curve (c) shows the Raman spectrum of melamine powder, in which all of the vibrational modes of melamine can be observed; the peaks at 583, 676 and 984 cm^-1 are the characteristic Raman peaks of melamine. The SERS spectrum of the melamine aqueous solution adsorbed on these substrates closely matches the Raman spectrum of melamine powder. The strongest peak, at 675 cm^-1, involves the in-plane deformation of the triazine ring and is assigned to the ring-breathing II mode [20]. Another strong Raman peak, at 984 cm^-1, is attributed to the ring-breathing mode of the triazine ring and is related to the ring nitrogen atoms [21].

Conclusion
In this study, a pyramid-array surface microstructure was prepared on silicon wafers by a wet chemical etching process. The Si/Au SERS substrate was fabricated by sputtering a gold film onto the etched wafer surface. The optimum gold sputtering time was 5 minutes, which produced more hot spots on the Si/Au SERS substrate and hence a higher Raman enhancement effect. Raman signals of low concentrations of methylene blue (1×10^-5 mol/L) and melamine aqueous solution (1×10^-3 mol/L) adsorbed on the Si/Au SERS substrates were detected, respectively. In addition, this substrate offers great advantages in ease of manufacturing and morphology controllability. These results indicate that the Si/Au SERS substrate has potential applications in food safety and in the detection of chemical dyes and contraband.
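The paper reports the enhancement qualitatively; a common way to quantify it is the analytical enhancement factor, i.e., the SERS signal per unit analyte concentration relative to normal Raman. A minimal sketch with hypothetical peak intensities (the paper does not report an EF, so all numbers below are placeholders):

```python
def analytical_ef(i_sers, c_sers, i_ref, c_ref):
    """Analytical enhancement factor: signal per unit concentration
    on the SERS substrate relative to normal Raman."""
    return (i_sers / c_sers) / (i_ref / c_ref)

# Hypothetical peak intensities (counts) for the 447 cm^-1 MB band.
ef = analytical_ef(i_sers=45000, c_sers=1e-5,   # MB on the Si/Au substrate
                   i_ref=900,    c_ref=1e-4)    # MB normal Raman
print(f"analytical EF ~ {ef:.1e}")
```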
Antigen-Specific Treg Therapy in Type 1 Diabetes – Challenges and Opportunities

Regulatory T cells (Tregs) are key mediators of peripheral self-tolerance, and alterations in their frequencies, stability, and function have been linked to autoimmunity. The antigen-specific induction of Tregs is a long-envisioned goal for the treatment of autoimmune diseases given the reduced side effects compared to general immunosuppressive therapies. However, the translation of antigen-specific Treg-inducing therapies for the treatment or prevention of autoimmune diseases into the clinic remains challenging. In this mini review, we will discuss promising results for antigen-specific Treg therapies in allergy and specific challenges for such therapies in autoimmune diseases, with a focus on type 1 diabetes (T1D). We will furthermore discuss opportunities for antigen-specific Treg therapies in T1D, including combinatorial strategies and tissue-specific Treg targeting. Specifically, we will highlight recent advances in miRNA targeting as a means to foster Tregs in autoimmunity. Additionally, we will discuss advances and perspectives of computational strategies for the detailed analysis of tissue-specific Tregs at the single-cell level.

INTRODUCTION
The body's immune system has evolved to effectively defeat and destroy infiltrating foreign pathogens. In order to prevent autoimmune reactions directed against the body's own cells, our immune system employs sophisticated mechanisms of self-tolerance. On the T cell level, self-tolerance is executed in the thymus by deletion of T cells with self-reactive TCRs (central tolerance). Outside of the thymus, peripheral tolerance is maintained by specialized cells, including so-called regulatory T cells (Tregs). Tregs are characterized by high expression of the interleukin-2 receptor alpha chain (CD25) and the transcription factor Foxp3, which is the master regulator of the Treg phenotype and function (1)(2)(3)(4). The critical importance of Tregs for the maintenance of self-tolerance is illustrated by severe multi-organ autoimmunity in humans with the immune dysregulation, polyendocrinopathy, enteropathy, X-linked syndrome (IPEX) (5) and in mice with Scurfy mutations (6), both resulting from mutations in the Foxp3 gene. Tregs develop in the thymus, referred to as thymic Tregs (tTregs), and harbor a TCR repertoire that is skewed towards self-antigens. Additionally, Tregs can likewise be induced in the periphery in an antigen-specific manner, so-called peripheral Tregs (pTregs), with a TCR repertoire different from their tTreg counterparts (7). Considerable research has been conducted in order to induce disease-relevant antigen-specific Tregs with the goal to restore mechanisms of tolerance and interfere with unwanted immune reactions in allergies and autoimmunity. Accordingly, we and others have shown that Treg induction requires stimulation via the TCR, and it has become apparent that fine-tuned TCR signals are needed to efficiently induce Tregs (8)(9)(10)(11). Here, we will discuss promising results for antigen-specific Treg therapies in allergy and specific challenges for such therapies in autoimmune diseases, with a focus on type 1 diabetes (T1D), as well as opportunities for antigen-specific Treg therapies in T1D.

ADVANCES IN ANTIGEN-SPECIFIC TREG THERAPIES IN ALLERGY
Antigen-specific therapy is a long-envisioned goal for the treatment or prevention of autoimmune diseases.
The ability of Tregs to regulate immune responses not only via direct inhibition of effector T cells with the same specificity but also via modulation of antigen-presenting cells (APCs), a process called bystander suppression, makes Tregs an important target for tolerizing therapies (12). Currently, approaches based on the expansion, manipulation, and transfer of autologous Tregs, as well as on in vivo induction with antigen, are extensively studied. While the ex vivo expansion of polyclonal Tregs has proven to be safe in the clinic, the efficacy is largely dependent on disease-relevant antigen-specific Tregs. However, their very low frequency in the case of autoimmune diseases necessitates the manipulation of Tregs before transfer [reviewed in (13)]. This includes the forced expression of FOXP3 in autoantigen-specific effector T cells as well as the expression of disease-relevant TCRs on isolated Tregs [reviewed in (13)]. Although results from preclinical studies are promising, the long-term fate of these engineered Tregs is not fully understood, and especially the differentiation into pro-inflammatory lineages might be a safety concern. The alternative of inducing Tregs with antigen administered directly to patients is more cost-effective, and its safety has been demonstrated in a variety of clinical trials. Even though clinical translation of such tolerizing therapies has been challenging, several examples relying on different forms of antigen delivery and tolerization protocols from pre-clinical and clinical trials highlight the potential of such strategies. Desensitization to allergens is a common practice for the treatment of severe allergies. However, only a few studies have addressed the effect of such antigen-specific desensitization protocols on Tregs. Importantly, oral immunotherapy with peanut proteins in allergic patients led to an increase in peanut protein-specific FOXP3+ Tregs within peripheral blood mononuclear cells (PBMCs) 6 and 12 months after the treatment started (14). Interestingly, in a follow-up study focusing more specifically on Tregs, it became evident that the increased frequencies of peanut protein-specific Tregs were associated with enhanced DNA demethylation of the FOXP3 locus (15), a measure for maintenance of FOXP3 expression and therefore for the stability of the Treg phenotype (16). These findings highlight that antigen-specific therapy can not only enhance Treg frequencies but also positively affect Treg characteristics, including their stability.

CHALLENGES FOR ANTIGEN-SPECIFIC TREG THERAPY IN AUTOIMMUNITY AND T1D
Autoimmune diseases like T1D affect millions of people worldwide with a steadily rising incidence. Currently, curative treatments for autoimmune diseases do not exist, and available therapies rely on the treatment of symptoms, often involving immunosuppressive reagents that can have severe side effects. The antigen-specific induction of disease-relevant Tregs offers the opportunity to restore natural tolerance mechanisms in the absence of immune side effects induced by general immune suppression and is therefore a long-standing goal for the treatment or prevention of autoimmune diseases. We were able to demonstrate that in the peripheral blood of children at risk to develop T1D, insulin-specific Treg frequencies are reduced during the onset of islet autoimmunity, while higher frequencies are associated with a slow progression to clinically overt T1D (17).
These findings directly support the concept of inducing these insulin-specific Tregs to delay the progression to clinically symptomatic disease. However, the translation of antigen-specific Treg therapies for autoimmune diseases into the clinic remains challenging, and most studies using oral insulin treatments for tolerization in T1D conducted so far failed to meet their primary outcome (18,19). Nevertheless, post-hoc analysis revealed a delay in progression in a subset of the treated participants (20). One analytical caveat of clinical trials studying Treg therapies has been the divergence of protocols for Treg identification in peripheral blood. While in the mouse setting Foxp3 is expressed exclusively by Tregs, human effector T cells can transiently express intermediate levels of FOXP3. Accordingly, most researchers characterize human Tregs as CD25+CD127lowFOXP3+. It has become apparent, though, that even those more stringently defined Tregs are heterogeneous in their composition. Not only can Tregs co-express classical effector T cell transcription factors (e.g., TBET, RORC, GATA3), which affects their migration and function, but they also vary in their activation state and functionality. This is especially evident in the divergent expression of CD45RA, with CD45RA− Tregs being antigen-experienced and having a higher suppressive activity [reviewed in (7)]. According to this heterogeneity, divergent markers have been used for the identification of Tregs in clinical trials, which contributes to the difficulties in assessing translatability. Importantly, researchers are starting to analyze antigen-specific immune responses in such clinical trials in more mechanistic detail, which will help to define critical parameters, such as the optimal dosing of oral insulin. Additionally, other factors need to be critically considered, including the route of administration and the chosen antigen, but also the time point of administration within the disease course. We know from murine studies that the efficient de novo induction of Tregs from naïve T cells in vivo requires stimulation with a strong-agonistic ligand for the TCR supplied under subimmunogenic conditions (8,9). Higher immunogenic doses of antigen, on the other hand, activate the Pi3k-Akt-mTOR pathway, thereby directly inhibiting Treg induction (10). We used immunodeficient HLA-DQ8-transgenic NOD-Scid-IL2Rg knockout (NSG) mice reconstituted with human hematopoietic stem cells to study requirements for human Treg induction in vivo. Importantly, these humanized mice develop a functional human immune system, including the positive selection of autoreactive insulin-specific CD4+ T cells in the thymus (17,21). Using this system under steady-state conditions in the absence of autoimmune activation, we were able to demonstrate that, similar to the murine setting, subimmunogenic doses of strong-agonistic insulin variants are able to induce human Tregs in vivo (17). In contrast to the steady state, we demonstrated that during the onset of islet autoimmunity, the capacity to induce Tregs from naïve T cells from peripheral blood is significantly impaired (22). Importantly, this impairment in Treg induction was not limited to the insulin-specific population, but was likewise observed for hemagglutinin-specific and polyclonal Treg induction, highlighting a broad defect in Treg induction (22).
Furthermore, we were able to show that a reduction in the activation threshold of insulin-specific T cells during the onset of islet autoimmunity limits the possibility of subimmunogenic stimulation for efficient Treg induction (22). Apart from defects in Treg induction during islet autoimmunity, we likewise observed reduced Treg stability, as indicated by increased DNA methylation of the conserved non-coding sequence 2 (CNS2) of the Foxp3 locus, both in non-obese diabetic (NOD) mice (a mouse model for T1D) with islet autoimmunity and in children with overt T1D (23). The Foxp3 CNS2 is completely demethylated in stable Tregs, while its methylation leads to the loss of Foxp3 expression and the Treg phenotype (16). Importantly, this defect in Treg stability in NOD mice was observed already at a young age, shortly after weaning, indicating a possible causative role in disease development and progression as opposed to a mere consequence of the ongoing autoimmune process (23). The identified impairments in Treg induction and stability directly highlight the importance of considering the time point of administration of antigen-specific Treg-inducing therapies. Our in vitro and ex vivo data suggest limitations in the efficacy of such treatments during the first years after development of islet autoimmunity. In addition, these findings strengthen the rationale of considering preventive strategies in genetically at-risk patients, before the onset of overt islet autoimmunity, for future antigen-specific Treg targeting in man. Accordingly, for T1D, pilot results from the Pre-POINT study, the first study to administer daily oral insulin to children at risk of developing T1D but before the start of the autoimmune reaction, showed enhanced frequencies of insulin-specific CD4+ T cells with regulatory features (24). These preliminary results are currently being further investigated for efficacy in the larger POINT study (25).

OPPORTUNITIES FOR ANTIGEN-SPECIFIC TREG THERAPY IN T1D
The finding that the Treg induction potential is significantly limited during the onset of islet autoimmunity (22) highlights the concept that antigen-specific Treg induction in the presence of ongoing autoimmune activation will benefit from combinatorial immune targeting. Specifically, a combination with treatments that control aberrant immune activation while fostering Tregs will be critical in order to broaden the window of opportunity for Treg induction.

miRNA Targeting to Foster Tregs in Islet Autoimmunity
With the goal to understand mechanisms of impaired Treg induction, we focused on microRNAs (miRNAs). miRNAs are small non-coding RNAs that can sequence-specifically inhibit their target mRNAs. miRNAs usually target a multitude of different mRNAs, thereby regulating entire signaling pathways and complex cellular states, such as T cell activation, which makes them important targets for immunotherapies (26)(27)(28). Using miRNA sequencing of CD4+ T cells from the peripheral blood of children with or without ongoing islet autoimmunity, we were able to identify several differentially regulated miRNAs and investigated three in more detail. Specifically, we focused on miRNAs that are predicted to target negative regulators of T cell activation and could therefore potentially inhibit Treg induction [reviewed in (29)(30)(31)].
We were able to demonstrate that miRNA92a-3p, a member of the miRNA17~92 cluster of miRNAs, which was shown to induce lupus-like autoimmunity when overexpressed in mice (32), regulates human T follicular helper (TFH) cell differentiation (33). TFH cells are an integral part of the humoral immune response because of their ability to help B cells produce high-affinity antibodies [reviewed in (34)]. Accordingly, we found CXCR5+ insulin-specific TFH cell frequencies to be increased during the onset of islet autoimmunity, which directly correlated with miRNA92a-3p expression. Importantly, miRNA92a-3p targets negative regulators of T cell activation (e.g., PTEN, PHLPP2, FOXO1, CTLA4) and thereby simultaneously reduces Treg induction. Hence, inhibition of miRNA92a-3p enhanced Treg induction, while a miRNA92a-3p mimic reduced it (33). Furthermore, we investigated miRNA181a-5p, which had been demonstrated previously to regulate the signal strength of the TCR stimulus in developing T cells in the thymus (35). In line with the excessive T cell activation observed during recent onset of islet autoimmunity, we found miRNA181a-5p to be specifically increased in CD4+ T cells from the peripheral blood of children with recent activation of islet autoimmunity. Importantly, we found that higher expression of miRNA181a-5p enhances the expression of Nfat5, involving mechanisms of increased TCR and co-stimulation, and that enhanced Nfat5 expression negatively affects Treg induction. Accordingly, inhibiting either miRNA181a-5p or Nfat5 augmented in vitro Treg induction, while inhibiting miRNA181a-5p in Nfat5-deficient T cells had no effect on Treg induction. These findings thereby highlight that miRNA181a-5p-mediated impairments in Treg induction are dependent on Nfat5 (22). In a third study, we used high-throughput sequencing of RNA isolated by crosslinking immunoprecipitation (HITS-CLIP) to show that miRNA142-3p directly targets the methylcytosine dioxygenase Tet2. Importantly, TET proteins catalyze the first step of DNA demethylation and can thereby impact the epigenetic landscape (36). We were able to link increased expression of miRNA142-3p and the resulting reduced Tet2 expression with impairments both in Treg induction and in Treg stability. Accordingly, the inhibition of miRNA142-3p was able to enhance Treg induction and enable induced Tregs to retain their Foxp3 expression to a higher degree than their untreated counterparts (23). Importantly, the inhibition of all three miRNAs or of the downstream molecule Nfat5 directly in vivo in NOD mice with ongoing islet autoimmunity resulted in enhanced frequencies of Tregs accompanied by a reduction in the clinical disease score of the mice (22,23,33). These preliminary findings highlight the potential of miRNA targeting as an immunotherapy in T1D. Notably, a miRNA inhibitor is currently being investigated in a clinical trial as a treatment for hepatitis C virus infections, indicating the feasibility of miRNA modulation as an immunotherapy (37). However, miRNAs are important regulators of cellular functions and can have distinct properties depending on the cell type. Therefore, the use of miRNA modulation as an immunotherapy will be largely dependent on cell type-specific targeting of the therapy. Specifically, the targeted delivery of miRNA inhibitors or mimics to immune cells, or even to immune cell subsets, will greatly improve their use as immunotherapeutics.
Here, it will be especially important to identify specific signatures for targeting defined subsets of immune cells, e.g., tissue-specific Tregs in the target organ, the pancreas.

Targeting Tissue-Specific Tregs
Apart from their canonical function of immune suppression, it is now well accepted that Tregs likewise take residence in tissues, where they play important roles in maintaining tissue homeostasis. These tissue Tregs were found to express specific gene signatures that are distinct from those of their circulating counterparts. Such tissue-specific Treg gene signatures have been identified for Tregs from several tissues and have been especially well studied for Tregs in muscle and adipose tissue [reviewed in (38)]. Importantly, some signature genes are universal for tissue Tregs, while others are more unique to Tregs from distinct tissues, e.g., the expression of the transcription factor PPARg in adipose tissue-residing Tregs (39). Apart from their gene expression signature, TCR sequencing of tissue-resident Tregs has identified a distinct TCR repertoire and the clonal expansion of certain TCRs, indicating a response to tissue-specific antigens (40). Importantly, treatment with the PPARg agonist pioglitazone, which is used for the treatment of type 2 diabetes because of its positive effects on metabolic health and local inflammation, was shown to expand adipose tissue Tregs, which supports the idea of targeting tissue-specific Tregs for the treatment of diseases (39). While Tregs in adipose tissue, muscle and the intestine have been studied extensively, only very little is known about Tregs in the pancreas. A study by the group of Christophe Benoist demonstrated that the diabetic lesions in NOD mice are enriched in CXCR3+ Tregs and that the expression of CXCR3 is dependent on Tbet. More importantly, they showed that the ablation of Tbet in Tregs accelerates the disease and overcomes the usually present sex bias in NOD mice (41). Interestingly, Tbet+ Tregs were also found in the lamina propria of patients with inflammatory bowel disease (42) as well as in patients with multiple sclerosis (43), where Tbet+ Tregs were shown to contribute to disease manifestation and to be less suppressive (43). Importantly, the reduced suppressive activity was linked to Ifng production by the Tregs, which was not elevated in Tbet+ Tregs from the pancreas (41). These findings highlight the possibility of specifically targeting defined Treg subsets within the pancreas for a more tailored immune modulation. However, all studies conducted so far on pancreas-residing Tregs focused solely on NOD mice with ongoing insulitis. A more detailed understanding of pancreas-residing Tregs and their contribution to immune homeostasis in the steady state will be crucial to advance immune modulation targeted to the pancreas. As one means to foster advancement in tissue-specific Treg targeting, recent years have seen tremendous progress in the simultaneous analysis of the transcriptome, DNA methylation and accessibility, surface protein expression, perturbations, and receptor sequences at the single-cell level. In this regard, computational strategies for the integration of these complex data sets have enabled an unprecedented description of the molecular behavior and identities of individual cells and have therefore made it possible to move on to the next level of dissecting tissue Tregs (Figure 1).
Defining Tissue-Specific Treg Characteristics Using Single-Cell Multi-Omics Integration
Current single-cell multi-omics methods can measure up to four different omics types at once [reviewed in (44)(45)(46)], with the transcriptomics layer often used to connect the different omics types. These techniques bear high potential for medical research to study individual heterogeneity, drug resistance, or disease progression at an unprecedented level (47,48). Especially T cell-focused immunological studies will benefit from recent developments, as newly arising techniques can also simultaneously reconstruct TCR sequences and determine their specificities for a predefined set of epitopes (49-51). These methods have already greatly advanced our understanding of T cell responses in disease (50,(52)(53)(54)(55)(56) and have led to innovative analysis strategies such as the usage of the TCR sequence as a natural barcode to trace the cellular response pre- and post-antigen stimulation in vivo (57). With the rise of single-cell multi-omics approaches, new computational models have been developed that can jointly analyze such multi-modal data [reviewed in (46,58)]. Several studies used correlation-based approaches to jointly analyze copy number variations (59,60), DNA methylation (61-63), or protein abundance (64) together with gene expression data. Recently, Schattgen et al. proposed an integration approach for TCR and gene expression data based on graph analysis defined on transcriptomic and TCR distances and could uncover known and novel associations between TCR sequences and transcriptomic phenotypes (65). Others used traditional statistical approaches (66) or advanced deep learning methods (67)(68)(69)(70)(71)(72)(73) to integrate multiple data sources at once to represent the joint information of all omics layers. Along these lines, a recent method by Zhang et al. jointly integrated TCR and transcriptomic information using Bayesian clustering based on the TCR sequence and gene expression profile (74). Through this method, Zhang et al. could show that joint TCR and gene expression analysis better separates T cell specificity and captures the antigen-binding efficiency gradient better than TCR information alone (74). Similarly, we introduced a joint TCR-transcriptome deep learning model which additionally captured transcriptional gradients within clonotypes (73). Such methods could be used to further elucidate the relationship between the TCR sequence and the transcriptional state of Tregs in autoimmune diseases. The identification of specific TCRs on tissue Tregs will help to define whether the migration of these cells to the tissue is likely antigen-driven and can also help to facilitate studies on tissue Tregs. In this regard, Diane Mathis' group was able to analyze the ontogeny of visceral white adipose tissue (VAT)-residing Tregs by generating a mouse line transgenic for the TCR of an expanded VAT Treg clone (40). Additionally, the transfer of TCR-transgenic Tregs has already been tested in preclinical studies for autoimmune diseases (75,76). These studies mostly rely on the use of effector T cell-derived TCRs, and it is not entirely clear how that could affect Treg function, migration, and fate after transfer. The identification of tissue- and Treg-specific TCRs in the steady state, as well as of differences in the disease state, might enable us to design such transgenic Tregs more strategically and could therefore help to increase the efficacy and safety of TCR-transgenic Treg infusions.
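A minimal sketch of the "TCR as natural barcode" idea mentioned above: group cells by their CDR3β clonotype and compare a per-cell phenotype score before and after antigen stimulation. The table, clonotypes, and activation scores below are all hypothetical placeholders, not data from the cited studies.

```python
import pandas as pd

# Hypothetical single-cell table: each row is a T cell with its CDR3b
# clonotype, stimulation condition, and an activation-gene score.
cells = pd.DataFrame({
    "cdr3b":     ["CASSLGQG", "CASSLGQG", "CASSIRSS", "CASSIRSS", "CASSPDRG"],
    "condition": ["pre", "post", "pre", "post", "post"],
    "act_score": [0.1, 0.8, 0.2, 0.3, 0.9],
})

# Use the TCR sequence as a natural barcode: compare the mean activation
# score of each clonotype before and after antigen stimulation. Clonotypes
# seen only in one condition end up with a NaN delta.
traced = cells.pivot_table(index="cdr3b", columns="condition",
                           values="act_score", aggfunc="mean")
traced["delta"] = traced.get("post") - traced.get("pre")
print(traced)
```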
However, the identification of TCR sequences is only one side of the coin, and a remaining bottleneck for T cell biology is the identification of the peptide-MHC ligands recognized by the identified TCRs. Here, recent advances have been made in the experimental identification of epitopes recognized by orphan TCRs in high-throughput screens of highly complex peptide-encoding oligo pools presented by bar-coded, T cell cytokine-capturing APCs (77). Additionally, machine learning has enabled novel computational approaches to predict TCR specificity. Sequence-based computational methods for TCR specificity analysis can be grouped into two categories: comparison and prediction. TCR comparison approaches impute antigen specificities either by allocating unknown TCRs to T cell clusters or by assigning pairwise distance scores to TCR sequences with known antigen specificity. When several TCRs specific to the antigens of interest are known, these methods can be used to identify T cells with similar sequences that are likely to bind the same antigen. The second category applies machine learning models to directly predict TCR binding to specific epitopes. Since these methods often additionally analyze the epitope sequence, they allow the prediction of specificity towards previously unknown antigens. TCR sequences with common epitope specificity carry statistically enriched motifs (78,79). Methods such as TCRdist (78) and GLIPH (79,80) compare such common motifs to identify TCR sequences with shared antigen specificities. Other methods differing in computational approach have been proposed to match TCRs using sequence similarity (81) or numeric embeddings (82,83). While comparison-based methods can serve as a proxy for determining TCR specificity, such methods fail for novel epitopes without known corresponding TCRs. Machine learning methods can alleviate these issues by learning general rules that guide the T cell-epitope interaction. De Neuter et al. provided a proof of concept by predicting specificity towards one of two B*08-restricted HIV-1 epitopes based on the TCR CDR3b sequence (84). Jurtz et al. additionally incorporated the peptide sequence but observed limited generalization to unknown epitopes (85). Subsequently, different models developed on varying datasets have been proposed with limited improvements (86). In recent years, deep learning methods were introduced (52,87-89), some of which incorporate additional information such as CDR3a, CDR1 and CDR2 sequences, HLA type, and surface protein counts, in part leading to increased prediction performance (52,86). These tools will potentially enable the identification of Tregs associated with disease-relevant antigens by predicting the specificity of large libraries of sequenced T cells. By limiting the number of candidates for which specificity needs to be tested, the time and cost of identifying disease-relevant Tregs will be significantly reduced. However, due to different evaluation methodologies and different datasets, these methods often cannot be compared directly. Therefore, it remains to be determined which model to choose and to what degree computational tools can already be used for the development of targeted immunotherapies. It is apparent, though, that the use of multi-omics techniques for the deep characterization of tissue-specific Tregs can critically contribute to the development and advancement of Treg-based immunotherapies.
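As an illustration of the comparison-based category, the sketch below clusters CDR3β sequences by edit distance under a fixed cutoff. This is only a toy stand-in for tools such as TCRdist or GLIPH, which use more sophisticated motif- and position-aware distances; the sequences and the cutoff are hypothetical.

```python
from itertools import combinations

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

# Hypothetical CDR3b sequences; the first two differ by a single residue
# and would be grouped as likely sharing specificity under this cutoff.
cdr3s = ["CASSLGQGAYEQYF", "CASSLGQGSYEQYF", "CATSRDTQYF"]
CUTOFF = 1
for a, b in combinations(cdr3s, 2):
    d = edit_distance(a, b)
    verdict = "same cluster" if d <= CUTOFF else "different"
    print(f"{a} vs {b}: distance {d} -> {verdict}")
```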
TCR-transgenic Tregs migrate to the site of immune activation and will therefore facilitate the development of effective and safe therapies. Additionally, the identification of surface markers specific to tissue-residing Tregs will enable the targeted delivery of therapeutics, e.g., miRNA inhibitors or mimics, to foster Tregs specifically at the site of the autoimmune attack.

CONCLUSION
While advances have been made in antigen-specific Treg-inducing therapies, e.g., to treat patients with severe peanut allergies, the success of such therapies in autoimmune T1D is still limited. A broad impairment in Treg induction in children during the onset of islet autoimmunity highlights the necessity of combinatorial strategies to foster Tregs in order to open the window of opportunity for antigen-specific Treg therapies. miRNA targeting offers the opportunity to improve Treg induction and stability in T1D; however, new strategies to specifically modulate miRNAs in particular cell types are needed. Identifying key signatures and characteristics of Tregs residing in the pancreas, the target organ of the disease, will be important to target therapies more specifically to those cells that are directly involved in disease development and progression. Major advances in the use of single-cell multi-omics integration, together with machine learning approaches for TCR specificity prediction, have paved the way for a detailed description of individual cells from different tissues and will therefore help to bring antigen-specific Treg therapy to the next level.

AUTHOR CONTRIBUTIONS
IS and FD reviewed the literature and wrote the manuscript. CD and BS reviewed the literature and contributed to the conceptualization of the manuscript. All authors contributed to the article and approved the submitted version.
Impact of the Simulated Gastric Digestion Methodology on the In Vitro Intestinal Proteolysis and Lipolysis of Emulsion Gels

The aim of this work was to study the impact of the methodology of in vitro gastric digestion (i.e., in terms of the motility exerted and the presence of gastric emptying) and of gel structure on the degree of intestinal proteolysis and lipolysis of emulsion gels stabilized by whey protein isolate. Emulsions were prepared at pH 4.0 and 7.0 using two homogenization pressures (500 and 1000 bar), and the emulsions were then gelled by heat treatment. These gels were characterized by texture analysis and were then subjected to one of the following gastric digestion methods: the in vitro mechanical gastric system (IMGS) or in vitro gastric digestion in a stirred beaker (SBg). After gastric digestion, the samples were subjected to in vitro intestinal digestion in a stirred beaker (SBi). Hardness, cohesiveness, and chewiness were significantly higher in gels at pH 7.0. The degree of proteolysis was higher in samples digested by IMGS–SBi (7–21%) than by SBg–SBi (3–5%), regardless of the gel's pH. For SBg–SBi, the degree of proteolysis was not affected by pH, but when operating the IMGS, higher hydrolysis values were obtained for gels at pH 7.0 (15–21%) than at pH 4.0 (7–13%). Additionally, the percentage of free fatty acids (%FFA) released was reduced by 47.9% in samples digested in the IMGS–SBi. For the SBg–SBi methodology, the %FFA was not affected by the pH, but in the IMGS, higher values were obtained for gels at pH 4.0 (28–30%) than at pH 7.0 (15–19%). Our findings demonstrate the importance of choosing representative methods to simulate food digestion in the human gastrointestinal tract and their subsequent impact on nutrient bioaccessibility.

Introduction
Nowadays, the main nutritional problems of the world population, such as obesity or undernutrition, could be overcome by the adoption of different technological solutions [1]. These nutritional problems are directly linked to food intake, but it is not only the amount of food consumed that matters. Interactions between proteins, lipids and carbohydrates form the basis of the food matrix facing the digestive system. After digestion in the gastrointestinal tract (GIT), the release of the building blocks of macronutrients (e.g., amino acids, free fatty acids and/or glucose) depends on the food matrix structure and the complex processes occurring in the GIT [2]. This explains why, over the years, there has been increasing interest in understanding the behavior of food through the GIT [3]. Food digestion in humans or animals has traditionally been studied with in vivo approaches, but these are expensive, invasive, and subject to ethical restrictions [3,4]. Consequently, various in vitro digestion systems attempt to simulate the dynamic, physical and biochemical complexity of the GIT, and particularly systems that mimic the [...]

The deformation of the sample was calculated as

deformation (%) = (H0 − H)/H0 × 100

where H and H0 (m) are the final and initial heights after deformation, respectively. The overall stress acting on the sample during compression was expressed as the true normal stress, i.e., the normal force on the cylinder cross section divided by the initial area of the sample [28]:

σt = F/A0

where σt is the true normal stress (Pa), F is the normal force acting on the gel (N), and A0 is the cross-sectional area of the gel (m^2).
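A worked example of the two quantities just defined, assuming a cylindrical sample and hypothetical force and height readings; the percent-deformation definition follows the reconstruction above:

```python
import math

# Hypothetical uniaxial-compression readings for a cylindrical gel sample.
F = 8.0                 # normal force at break (N), placeholder value
d0 = 0.025              # initial sample diameter (m), placeholder value
H0, H = 0.020, 0.012    # initial and final heights (m), placeholder values

A0 = math.pi * (d0 / 2) ** 2        # initial cross-sectional area (m^2)
true_stress = F / A0                 # sigma_t = F / A0 (Pa)
deformation = (H0 - H) / H0 * 100    # percent deformation at break

print(f"true normal stress: {true_stress / 1000:.1f} kPa")
print(f"deformation at break: {deformation:.1f} %")
```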
In Vitro Digestion Assays of Emulsion Gels
The standardized digestion method proposed by the COST Infogest network [29,30] was used as the basis for the in vitro digestion assays, with some modifications. Three phases of in vitro digestion were simulated: the oral, gastric and intestinal phases. For gastric digestion, two methodologies were used: (i) the IMGS, a system composed of a human stomach model and a mechanical system producing realistic peristalsis [13], or (ii) the SBg, a gastric stirred beaker operated at 150 rpm. The impact of gastric emptying on the in vitro intestinal lipolysis and proteolysis of emulsion gels was studied. After gastric digestion using the IMGS or SBg, the chyme obtained was subjected to intestinal digestion in a double-jacketed glass beaker under continuous stirring (called SBi). The gastric and intestinal digestion times were 90 and 120 min, respectively.

In Vitro Oral Digestion
In total, 150 g of emulsion gel (~150 mL; pH 4.0 or 7.0) was mixed with 150 mL of SSF (divided into 5 charges of 30 g gel + 30 mL SSF each), giving a 1:1 mix with SSF containing amylase (75 U/mL) [29,30]. Each charge of this mixture (emulsion gel and SSF) was deposited in a dialysis bag. The bags were then sealed tightly with adhesive tape and chewed by a human volunteer. As oral residence time depends on the nature and textural characteristics of the material [12], the crushing times were 40 and 60 s for the samples at pH 4.0 and pH 7.0, respectively.

In Vitro Gastric Digestion
As previously mentioned, these assays were carried out in the IMGS or the SBg. In both cases, 300 mL of bolus from the in vitro oral digestion was mixed with 300 mL of SGF, giving a 1:1 mix with SGF containing pepsin (2000 U/mL) [29,30]. For each methodology (IMGS or SBg) and type of sample (emulsion gel at pH 4.0 or 7.0), a gastric pH curve was programmed (Figure 1), since it is known that when a food enters the human stomach, a buffering effect is induced by the food, after which acid secretion produces an exponential decrease in pH. This pH change has been described in in vivo [33,34] and in vitro [5,35] studies, which were used as references to build the programmed pH curve. The pH curve developed for the IMGS took the gastric-emptying process into account. The gastric digestion procedures applied were:
SBg. The bolus was mixed with SGF (37 °C; pH 3.0), and the pH of this mixture was immediately controlled by a pH-stat automatic titration unit (Metrohm, 902 Titrando, Herisau, Switzerland). In its software (Tiamo 2.4), the parameters were adjusted to obtain the expected gastric pH curve by adding defined volumes of 0.5 N HCl solution at different time intervals until the end of the test period (90 min).
IMGS. The bolus was deposited in the simulated stomach of the IMGS, and the SGF was added at a rate of 3.33 mL/min by pumping (Surefusion™, Nipro, Osaka, Japan). The flow rate of gastric juice was representative of that found in in vivo studies [9,34,36]. Simultaneously, operation of the IMGS started, exerting peristalsis in the stomach at a frequency of 3 cycles/min, a physiological value for the human stomach [37,38]. For simulation of the gastric pH curve, the same method used for the SBg was applied (Figure 1). After the first 15 min, the gastric emptying valve placed at the pyloric section of the simulated stomach was opened.
The gastric emptying was performed intermittently every 10 min using a peristaltic pump (Fisherbrand™ 13-876-2, Fisher Scientific, Suwanee, GA, USA) until the end of the whole digestion period (120 min), following the phenomenology reported in in vivo studies [12]. The gastric chyme was transferred at a rate of 10 mL/min to the intestinal phase [36,39], with prior adjustment of the pH to 7.0. At the pyloric zone, a membrane (pore size: 2 mm) was incorporated to control the particle size of the chyme passing to intestinal digestion [5]. During IMGS digestion, the overall mechanical force exerted on the emulsion gel samples at pH 4.0 and 7.0 was measured as reported previously [13], obtaining values in the ranges 0.2–1.2 N and 0.2–1.5 N, respectively. These results are in line with those found in an adult human stomach [37,40].

In Vitro Intestinal Digestion
The intestinal phase was carried out in a double-jacketed beaker at 37 °C under continuous agitation at 150 rpm (SBi). After digestion in the SBg, the gastric chyme obtained (650 mL; pH ~2.0, as shown in Figure 1) was adjusted to pH 7.0 with 1 N NaOH solution. Immediately after that, the chyme was transferred to the SBi and mixed with 650 mL of SIF (37 °C; pH 7.0), giving a 1:1 mix with SIF containing the enzymes [29,30]. For the studies of intestinal proteolysis, the SIF was composed only of trypsin (100 U/mL), pancreatin standardized to a trypsin activity of 100 U/mL, and chymotrypsin (25 U/mL); for the intestinal lipolysis assays, the SIF also contained lipase (2000 U/mL) [29,30]. When digesting in the IMGS, the gastric chyme at pH ~2.0 (see Figure 1) was emptied at 10 mL/min into the SBi [36,39], with prior adjustment of the pH to 7.0 with 1 N NaOH solution, as mentioned previously. The neutralized chyme was then mixed with SIF (37 °C; pH 7.0) pumped (Surefusion™, Nipro, Japan) at a rate of 5.4 mL/min into the SBi until it reached the total volume (650 mL). The flow rate of the simulated intestinal fluid is in accordance with previous studies reporting values of intestinal secretion in the small intestine ranging from 0.3 to 20.8 mL/min, measured in human volunteers [41][42][43][44]. During intestinal digestion, and after neutralizing the gastric chyme, the sample pH was monitored using an automatic titration unit (Metrohm, 902 Titrando, Herisau, Switzerland) and maintained at a value of 7.0 by adding 0.7 N NaOH solution over the 120 min of digestion. The volume of NaOH solution added to the digested mixture was recorded and used to calculate the intestinal hydrolysis of proteins and lipids. After hydrolysis of a peptide bond, one carboxylic group and one α-amino group are produced. During in vitro digestion at pH 7.0, carboxylic groups release their proton, which can be titrated with NaOH solution using pH-stat automatic titration [45].
The volume of NaOH solution added can be converted into the degree of intestinal protein hydrolysis (DH), as follows [45,46]:

DH (%) = [V_NaOH proteolysis(t) × N_NaOH] / [α(RNH2) × m_protein × h_tot] × 100

where V_NaOH proteolysis(t) is the volume of NaOH consumed during a protein digestion time t (L), N_NaOH is the NaOH normality (eq/L), m_protein is the initial protein mass in the sample (g), h_tot is the number of peptide bonds in the protein substrate (8.8 meqv/g for whey proteins [47]), and α(RNH2) is the mean degree of dissociation of the α-amino groups, which can be calculated as follows:

α(RNH2) = 10^(pH−pK) / (1 + 10^(pH−pK))

As pK depends on the working temperature (37 °C, physiological temperature) and pH (7.0), its value for this study was 7.4, and therefore α(RNH2) corresponds to 0.285 [46,48].

Intestinal Lipolysis
Each triacylglycerol (TAG) molecule generates two free fatty acids (FFA) when fully digested. The FFA released from a sample can be calculated from the total amount of TAG present in the original sample, according to [13]:

FFA (%) = [V_NaOH lipolysis(t) × M_NaOH × MM_lipid] / (2 × m_lipid) × 100

where V_NaOH lipolysis(t) is the volume of NaOH solution required to neutralize the FFA produced at lipid digestion time t (L), M_NaOH is the molarity of the NaOH solution (mol/L), MM_lipid is the molecular mass of the TAG oil (g/mol), and m_lipid is the total mass of TAG oil initially present in the sample (g). When lipolysis was analyzed, the final %FFA was obtained by subtracting the respective volume of NaOH solution used in the proteolysis calculation. The times used to estimate the FFA released were the same as those used for the degree of intestinal protein hydrolysis. Control runs without enzymes were also performed and subtracted from the reported values.

Statistical Analysis of Data
All experiments were carried out in triplicate using freshly prepared samples. Results are presented as mean values with standard deviations. Analysis of variance was carried out when required using Statgraphics Centurion XVI (version 16.1, Statistical Graphics Corporation, Rockville, MD, USA), including multiple range tests (p < 0.05) for separation of the least square means.

Characterization of Emulsions and Emulsion Gels
The oil droplet sizes and size distributions of the four oil-in-water (O/W) emulsions stabilized by WPI are presented in Table 2 and Figure 2. Neither the particle size nor the Pdi of the emulsions was significantly affected (p > 0.05) by the homogenization pressure, reaching values of 302–328 nm and ~0.19 for emulsions at pH 4.0, and ~275 nm and 0.10–0.17 for emulsions at pH 7.0, respectively. A decrease in oil droplet diameter with increasing homogenization pressure (500 bar vs. 1000 bar) was only observed in the pH 4.0 emulsions (328 nm vs. 302 nm, respectively), whereas in the emulsions at pH 7.0, the droplet diameter ranged from 260 nm to 271 nm. The latter can be attributed to the limited amount of surfactant available to stabilize the droplets formed [49]. It is probable that the disruptive forces would be able to generate smaller drops at 1000 bar, but even when there is a sufficient amount of surfactant for emulsion formation, it did not adsorb quickly enough during homogenization. Under turbulent flow conditions, as expected in high-pressure homogenization, newly formed droplets collide, which may lead to rapid recoalescence depending on the extent to which the droplets are readily covered by emulsifier molecules [50]. Very short timescales are involved in droplet coverage, and the whey proteins were presumably unable to completely stabilize small droplets at 1000 bar.
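As a brief aside before the characterization results continue: the pH-stat conversions defined in the methods above translate directly into code. A minimal sketch with hypothetical titration values (volumes, normalities, and masses are placeholders, with h_tot = 8.8 meqv/g and α(RNH2) = 0.285 as stated in the text):

```python
def alpha_rnh2(ph: float = 7.0, pk: float = 7.4) -> float:
    """Mean dissociation of alpha-amino groups; ~0.285 at pH 7.0, pK 7.4."""
    return 10 ** (ph - pk) / (1 + 10 ** (ph - pk))

def degree_of_hydrolysis(v_naoh_l, n_naoh, m_protein_g,
                         h_tot_meq_per_g=8.8, ph=7.0, pk=7.4):
    """pH-stat degree of protein hydrolysis (%): equivalents of base
    consumed over total peptide-bond equivalents, corrected by alpha."""
    eq_base = v_naoh_l * n_naoh                         # eq of NaOH added
    eq_bonds = m_protein_g * h_tot_meq_per_g / 1000.0   # total bond eq
    return 100.0 * eq_base / (alpha_rnh2(ph, pk) * eq_bonds)

def ffa_released_pct(v_naoh_l, m_naoh, mm_lipid_g_mol, m_lipid_g):
    """%FFA from pH-stat titration, assuming 2 FFA per triacylglycerol."""
    return 100.0 * (v_naoh_l * m_naoh * mm_lipid_g_mol) / (2.0 * m_lipid_g)

# Placeholder titration readings, for illustration only.
print(f"DH   = {degree_of_hydrolysis(0.010, 0.7, 15.0):.1f} %")
print(f"%FFA = {ffa_released_pct(0.012, 0.7, 880.0, 9.0):.1f} %")
```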
Unlike the pressure factor, the pH of the emulsions affected the particle size and Pdi, both being significantly lower (p < 0.05) at pH 7.0. This can be explained by the lower stability of whey proteins in dispersion at pH 4.0: when they are in an environment close to their isoelectric point (pH 5.2), they tend to aggregate, giving way to destabilization phenomena (e.g., coalescence or flocculation). Thus, oil droplets would form larger droplets or aggregates in the dispersion [51,52]. All the samples presented Pdi < 0.20 (Table 2), which indicates monodisperse size distributions (Figure 2) [53]. With respect to the textural characterization of the emulsion gels, the results of the TPA test (Table 3) indicate that hardness and cohesiveness were affected only by pH. Hardness values were ~10 N for gels at pH 4.0 and ~12 N for gels at pH 7.0. In turn, gels at pH 4.0 and 7.0 presented cohesiveness values of ~0.45 and 0.87, respectively. Both properties were significantly higher (p < 0.05) for the pH 7.0 emulsion gels. It is known that fine-stranded WPI gels are formed at pH 7.0, which gives them a more rigid conformation generated by disulfide bonds (covalent interactions) [54][55][56]. This would explain the harder and more cohesive structure of these gels. In addition, it has been indicated that an increase in the rigidity of emulsion gels prepared from WPI-stabilized emulsions can be obtained by lowering the emulsion oil droplet size [14]. At pH 4.0, whey proteins tend to aggregate, and a coarse (particulate) gel structure with fewer covalent binding points is generated [57], decreasing the hardness and cohesiveness of these gels.
Table 3 shows the congruence between the TPA results and the stress-at-break values. The pH of the emulsion gels significantly affected (p < 0.05) the stress at break: lower compression stresses (21.7–23.4 kPa) were needed to break the pH 4.0 samples, whereas higher stresses (76.2–95.0 kPa) were required to deform and break the pH 7.0 emulsion gels, which can be associated with the longer chewing period during the in vitro oral digestion (40 s for gels at pH 4.0 vs. 60 s for gels at pH 7.0, as described previously). Accordingly, the chewiness of the emulsion gels was significantly different (p < 0.05) when changing the pH from 4.0 to 7.0 and when varying the homogenization pressure. Emulsion gels at pH 7.0 had a higher hardness and were more elastic; hence, a greater force (8.5 N at 500 bar vs. 7.4 N at 1000 bar) was needed to chew the food and turn it into a bolus, so there is a proportional relationship between both parameters [58]. The deformation of the samples was significantly affected (p < 0.05) only by pH, resulting in a greater deformation before rupture for samples at pH 7.0 due to their structural conformation induced by disulfide bonds, which generates stronger and more elastic gels [56].

In Vitro Digestibility of Emulsion Gels
The impact of the type of gastric digestion (IMGS vs. SBg) of emulsion gels elaborated at different pH values and homogenization pressures on the in vitro intestinal proteolysis (%DH) and lipolysis (%FFA) was evaluated. pH curves were constructed for both gastric digestion methodologies to simulate food digestion in the stomach more realistically.

Gastric pH Curves
The gastric pH curves obtained during digestion of the emulsion gels in the IMGS and SBg systems are shown in Figure 1. At the beginning of digestion, the gastric pH was ~3.8 and 6.5 for gels fabricated at pH 4.0 and 7.0, respectively. Thereafter, a decrease in the pH of the gastric content was observed with both the IMGS and SBg methodologies. Both gastric digestion systems showed pH changes similar to those reported for in vitro [33,34] and in vivo studies [5,35], with pH values decreasing to ~2.0 after 90 min of digestion. From Figure 1, it is evident that the mixing process in the SBg is faster, because smoother pH curves were obtained in comparison with the noisy pH curves observed when digesting in the IMGS. In the SBg system, continuous agitation was used (similar to a perfectly stirred reactor), which promoted a more homogeneous mixing of the gastric content, independent of the gel size resulting from the oral phase of crushing or chewing. In fact, pH 4.0 gels subjected to oral digestion resulted in a bolus of paste-like consistency, in contrast to the particulate state found for gels at pH 7.0 (Figure 3). Regardless of the physical state of the gels after oral digestion, there are no drastic variations in pH during digestion, given the effect of the gastric mixing or homogenization. For the IMGS, it should be noted that the pH curves were obtained while applying realistic peristalsis and gastric emptying. Thus, the mixing of the gastric content (digested gel, SGF and HCl solution) is less homogeneous than in the SBg, which leads to oscillations in the gastric pH curves. These oscillations were less marked for the pH 4.0 gels, since the respective bolus formed after oral digestion presented a paste-like consistency (Figure 3), which facilitates the passage of chyme into the intestinal phase (SBi) through the gastric emptying process.
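The programmed gastric pH curves described above can be approximated by a simple first-order decay from the food-buffered starting pH toward the fasted-state value of ~2.0. A hedged sketch: the rate constant below is an assumed illustration, not a parameter fitted to the paper's Figure 1.

```python
import math

def gastric_ph(t_min: float, ph_start: float,
               ph_end: float = 2.0, k: float = 0.05) -> float:
    """Exponential decay of gastric pH from the buffered starting value
    toward the fasted-state value; k (1/min) is an assumed constant."""
    return ph_end + (ph_start - ph_end) * math.exp(-k * t_min)

# Starting pH values reported for the pH 4.0 and pH 7.0 gels (~3.8 and ~6.5).
for ph0 in (3.8, 6.5):
    curve = [round(gastric_ph(t, ph0), 2) for t in range(0, 91, 15)]
    print(f"start pH {ph0}: {curve}")   # both end near pH ~2.0 at 90 min
```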
Impact of the Type of In Vitro Gastric Digestion of Emulsion Gels on the Degree of Intestinal Proteolysis
The kinetics of proteolysis of the emulsion gels during their in vitro intestinal digestion (SBi system) are shown in Figure 4. The nomenclature SBg–SBi refers to gastric and intestinal digestion assays performed in a stirred beaker, whereas IMGS–SBi refers to assays with gastric digestion in the IMGS and subsequent intestinal digestion in a stirred beaker. As is clear from Figure 4, the kinetics of intestinal proteolysis of the emulsion gels differ markedly between methodologies. While in the SBg–SBi system these kinetics are very similar for all samples and do not present a lag phase, the kinetics obtained with the IMGS–SBi methodology differ in shape according to the pH of the studied sample and show a lag phase whose extent was dependent on the sample pH. After gastric digestion in the SBg, the total chyme obtained is transferred to the intestinal phase, which explains the absence of a lag phase for the samples digested by SBg–SBi: all the substrate (undigested protein or proteolytic fragments) is available to be hydrolyzed by trypsin and chymotrypsin, and consequently intestinal proteolysis can occur immediately. In contrast, the lag phase observed for samples assayed with the IMGS–SBi methodology, where the substrate is not hydrolyzed immediately, can be explained by the gastric-emptying process. When gastric emptying begins, the chyme transported to the intestinal phase is a liquid containing little available substrate, because in the first minutes of gastric digestion the disintegration of the gels is still limited. Whether hydrolyzed by pepsin or not, the amount of protein passing to the intestinal phase is low, resulting in this lag phase. In fact, it has been demonstrated that proteins behave differently in different digestive phases, for instance with respect to the gastric emptying rate [59]. On the other hand, significantly higher percentages of proteolysis (p < 0.05) were obtained using IMGS–SBi, with values ranging from ~7.0% to 21.5%, compared with SBg–SBi (Table 4). This may indicate an underestimation of the hydrolysis degrees reported when intestinal proteolysis is studied using digestion systems that operate as perfectly stirred batch vessels. From Figure 4, it can be seen that all the samples tested using SBg–SBi quickly increase their digestion rate, but this rate decreases within a short time until reaching a plateau period in which the amount of proteolytic products generated remains constant. Notably, a low final extent of proteolysis was found for all samples, with values between 2.4% and 4.8% (Table 4). The rapid decrease in the rate of protein digestion may be caused by possible inhibition of the enzymes due to the effect of the substrate or of the hydrolytic products accumulated in the system during digestion [60,61]. Thus, in this study it can be inferred that the low proteolysis values reached in the SBg–SBi system could be due to trypsin and/or chymotrypsin inhibition during the intestinal digestion of the samples, whereby the enzymes lose their activity and therefore no further reaction products are generated after ~10 min of substrate/enzyme contact. This is plausible because, as already mentioned for this system, after gastric digestion all the chyme is subjected to intestinal digestion.
As noted above, in the SBg-SBi system the enzyme is in contact with all the available substrate from the beginning, which can induce its inhibition. In a study of the enzymatic hydrolysis of lactalbumin, González-Tello et al. [62] demonstrated that different enzymes were inhibited when the amount of substrate for the reaction was increased. They concluded that the hydrolysis of whey protein can be explained by an instantaneous and irreversible binding of the enzyme to an inhibitor present in the substrate, or generated by the instantaneous hydrolysis of some minor component of the protein being hydrolyzed. This is corroborated by the fact that this phenomenon does not occur in the IMGS-SBi, where the substrate is gradually released into the intestinal phase, similar to what occurs during in vivo digestion. Here, the rate of release of hydrolytic products increases according to the amount of substrate being transferred and, therefore, the inhibition described above for the SBg-SBi system is not present when the IMGS-SBi system is used for in vitro digestion assays. Finally, the type of gastric motility exerted in an in vitro digestion process is fundamental for obtaining proteolysis values that are much more representative than those from conventional systems (stirred beakers), which may lead to an underestimation of the digestion rate due to phenomena such as the inhibition of proteolytic enzymes.

Influence of the pH of the Emulsion Gels on Intestinal Proteolysis

For the SBg-SBi system, the samples at pH 4.0 showed a significantly higher percentage of hydrolysis (p < 0.05) (3.0% and 4.8% at 500 and 1000 bar, respectively) than those at pH 7.0 (2.4%/500 bar and 3.9%/1000 bar) (Table 4). As previously discussed, this can be explained by the fact that in the latter samples the gel structure formed is more cohesive and harder, and therefore larger gel particles are still observed after the gastric phase (Figure 5). This agrees with the results obtained by Guo and co-workers [21,63], where a higher degree of gel fragmentation was found for "soft" emulsion gels in comparison with "hard" emulsion gels after gastric digestion. In consequence, the attack of hydrolytic enzymes on the protein substrate during intestinal digestion will be more difficult for gels formed at pH 7.0, resulting in less hydrolysis. Furthermore, possible substrate inhibition can occur, as already described. When analyzing the IMGS-SBi, a lag phase was observed for all samples, with times of 25 and 40 min for gels at pH 4.0 and 7.0, respectively. As in the gastric digestion, the gels at pH 7.0 take longer to disintegrate under the peristaltic forces, so the amount of protein that passes into the intestinal digestion at the beginning of gastric emptying is lower and, therefore, proteolysis begins later than in samples at pH 4.0. For the IMGS-SBi system, significantly lower percentages of protein hydrolysis (p < 0.05) were obtained in the samples at pH 4.0 (7.0% and 13.4% at 500 and 1000 bar, respectively) in comparison with samples at pH 7.0 (15.0%/500 bar and 21.5%/1000 bar) (Table 4). This difference can be attributed to the structure of the emulsion gels at pH 7.0: compared with smaller aggregates, larger aggregates provide fewer cleavage sites for digestive enzymes due to their lower total area, and thus show lower degradation rates [64].
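The lag times quoted above (25 and 40 min for the pH 4.0 and 7.0 gels in IMGS-SBi) are the kind of parameter that falls naturally out of a sigmoidal fit. The paper does not state how its lag phases were quantified, so the modified Gompertz fit below is an assumed analysis, run on invented placeholder data shaped like the IMGS-SBi curves (lag, then rise, no plateau).

```python
# Fitting a lag-phase model to proteolysis data. The modified Gompertz equation
# (Zwietering parameterisation) is a common choice for kinetics with a lag phase.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, A, mu, lam):
    """A: asymptotic %DH, mu: maximum rate (%DH/min), lam: lag time (min)."""
    return A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1.0))

t = np.array([0, 10, 20, 30, 40, 50, 60, 80, 100, 120], dtype=float)
dh = np.array([0.0, 0.1, 0.3, 1.0, 3.0, 6.0, 9.0, 14.0, 18.0, 21.0])  # %DH, invented

popt, _ = curve_fit(gompertz, t, dh, p0=[25.0, 0.3, 30.0], maxfev=10000)
A, mu, lam = popt
print(f"asymptote = {A:.1f} %DH, max rate = {mu:.2f} %DH/min, lag = {lam:.0f} min")
```

Comparing the fitted lag parameter across pH and pressure conditions would put the visual differences between the Figure 4 curves on a quantitative footing.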
In addition, the shape of the digestion kinetics differed with the pH of the samples: at pH 4.0 a plateau phase was reached at the end of the digestion time, whereas the samples at pH 7.0 gave the highest proteolysis values without reaching a plateau. This is related to the fact that, after gastric digestion, the pH 7.0 samples retained small fragments of gel that passed into the intestinal digestion, during which they disintegrated and released a greater amount of substrate at 80 min. During this time, the rate of digestion increases without reaching a plateau and, therefore, these samples would need more time for complete digestion. Figure 6 shows a representative image of the structural and physical state of the chyme formed by digesting emulsion gels at pH 7.0 in the IMGS and emptied into the intestinal vessel for subsequent digestion.

Effect of the Type of In Vitro Gastric Digestion of Emulsion Gels on the Intestinal Lipolysis

The intestinal lipolysis of the emulsion gels was measured as the percentage of free fatty acids (%FFA) released. This percentage was higher for gels digested in the SBg-SBi system than in IMGS-SBi, as shown in Figure 7. The final extent of FFA released from gels at 500 bar using SBg-SBi was 43.9% and 42.6% for samples at pH 4.0 and 7.0, respectively, whereas these values were 28.2% and 19.4% when using IMGS-SBi (Table 4). Similar values were obtained for emulsion gels at 1000 bar. These findings reflect the impact that gastric motility and emptying have on food digestion, particularly for solid and semisolid matrices. Here, the type of motility exerted by the IMGS does not produce such a homogeneous chyme, since the more realistically generated peristaltic movements retard the breakdown of emulsion gels (semisolid food matrices) during gastric digestion. This is because, in the human stomach, the maximum destructive force is ~1.9 N; therefore, it is difficult for the stomach to fragment harder food particles into smaller pieces [63]. Under these conditions, the pieces of emulsion gel that pass to the intestinal digestion are larger than under the conditions provided by the SBg system, where the mixture is more homogeneous; the access of the enzymes to the surface of the oil droplets is therefore limited, decreasing the rate of fatty acid release in these samples. Since the rate of lipolysis is controlled by the interfacial area available for the binding of lipase and pancreatin, and not by their amount [65], lipid digestion can be modulated by designing the structure of the gel around the oil droplets [66]. In addition, the shape of the lipolysis kinetics obtained (Figure 7) agrees with that found for intestinal proteolysis (Figure 4) when comparing the two gastric digestion methodologies (SBg vs. IMGS) followed by SBi. Thus, when digesting in the IMGS, which involves a gastric emptying mechanism, the amount of chyme that passes into the intestinal phase is limited, so the enzymes have less substrate to hydrolyze, resulting in a low rate of FFA release. The latter means that no plateau is generated during the 2 h of digestion, and a longer time would be required for complete digestion. This resembles the digestion kinetics studied by Barros and co-workers [13], who analyzed the lipolysis of WPI-stabilized O/W emulsions: although lipolysis was higher in the IMGS than in the SBg system, it did not reach a definitive plateau by the end of the digestion.
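For context, %FFA in pH-stat lipolysis assays is conventionally computed from the volume of NaOH titrated to keep the intestinal pH constant, on the assumption that two free fatty acids are released per triacylglycerol. The paper does not spell out its own calculation, so the function below shows the conventional formula with illustrative numbers; the oil molar mass and sample mass are assumptions.

```python
# Conventional %FFA calculation for pH-stat lipolysis: moles of NaOH consumed
# are compared with the maximum releasable fatty acids (2 FFA per triacylglycerol).

def ffa_percent(v_naoh_l, c_naoh, w_lipid_g, mw_lipid=880.0):
    """%FFA = 100 * (V * c * MW) / (2 * w); V in L, c in mol/L, MW in g/mol, w in g."""
    return 100.0 * (v_naoh_l * c_naoh * mw_lipid) / (2.0 * w_lipid_g)

# Example: 2.0 mL of 0.25 M NaOH consumed for 0.5 g of oil (values invented)
print(f"{ffa_percent(2.0e-3, 0.25, 0.5):.1f} %FFA")
```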
The need for longer digestion times has also been evidenced by in vivo studies of solid foods (e.g., rice), where after 3 h of digestion 60% of the dry solids had been recovered in the gastric emptying, so a longer time is needed for gastric digestion of all the content [5]. The same phenomenon was observed in previous in vitro digestion studies of emulsions based on WPI and soybean oil. Likewise, in the case of emulsions with palm, sunflower and linseed oil, a decrease in the rate of fatty acid release is only observed after 200 min of intestinal digestion [67,68]. The lipolysis kinetics of all samples digested in the SBg-SBi system do not present a lag phase during intestinal digestion. This indicates that an enzymatic attack occurs immediately after the start of the intestinal phase. This could be explained by the proteolysis that takes place during gastric digestion, where the whey proteins that stabilized the O/W emulsion were exposed to enzymatic attack by pepsin. Consequently, the stabilizing action of WPI in the emulsion could be altered during gastric digestion, which affected the subsequent intestinal lipolysis by facilitating the access of lipolytic enzymes to their substrate, allowing an almost instantaneous attack.

Influence of pH of Emulsion Gels on Lipid Digestion

The final extent of FFA generated after intestinal digestion in the SBg-SBi system did not show significant differences (p > 0.05) with the pH of the gels, reaching a value of 43.0% FFA (Figure 7; Table 4). When testing in SBg-SBi, a linear rate of lipolysis is observed during the first 65 min of digestion of emulsion gels at pH 4.0 and 500 bar, always with a released %FFA lower than that of the pH 7.0 gel. After that time a plateau is reached, whereas for gels at pH 7.0 this equilibrium stage occurs at 45 min. The result obtained for emulsion gels at pH 4.0/500 bar shows that during gastric digestion this sample behaved similarly to a liquid emulsion (Figure 5), even though it entered the oral phase as an emulsion gel. Therefore, this sample forms a homogeneous and stable matrix, and although there is a greater interfacial area, pepsin action is hindered. This is corroborated by the proteolysis results for this sample, where less protein hydrolysis was observed. This implies a greater amount of non-hydrolyzed protein, leaving the lipids less exposed to enzymatic attack and allowing the proteolytic products of the protein to maintain their potential action as a surfactant. In the case of the kinetics obtained in the IMGS-SBi, the four samples under study followed a similar trend during intestinal digestion, presenting a lag phase of approximately 5 min. This is because, when gastric emptying in the IMGS begins, the chyme transported to the SBi is a liquid with little substrate. Given that the gel disintegration is slow during the first minutes, the amount of lipids available for intestinal hydrolysis is consequently low, resulting in this lag phase. The digestion trends for these samples did not show significant differences (p > 0.05) until 80 min of digestion, at which point the samples at pH 4.0 began to increase their rate of digestion with respect to the samples at pH 7.0. The final %FFA obtained in IMGS-SBi is considerably affected by pH, with values of 29.1% and 17.2% for pH 4.0 and 7.0, respectively (Table 4).
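One compact way to summarise these contrasting lipolysis profiles is a first-order saturation model, FFA(t) = FFA_final * (1 - exp(-k*t)); comparing fitted rate constants quantifies the observation that SBg-SBi curves plateau within the 2 h window while IMGS-SBi curves are still rising. The parameters below are assumptions for illustration, not fitted values from this study.

```python
# First-order saturation model of lipolysis kinetics: a fast rate constant gives
# an early plateau (SBg-SBi-like), a slow one leaves the curve still rising at 2 h
# (IMGS-SBi-like). FFA_final and k are illustrative, not measured values.
import numpy as np

def ffa_t(t_min, ffa_final, k):
    return ffa_final * (1.0 - np.exp(-k * np.asarray(t_min, dtype=float)))

t = np.array([0, 15, 30, 45, 60, 90, 120], dtype=float)
print("SBg-SBi :", np.round(ffa_t(t, ffa_final=43.0, k=0.05), 1))   # fast, plateaus
print("IMGS-SBi:", np.round(ffa_t(t, ffa_final=29.0, k=0.01), 1))   # slow, still rising
```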
The difference between these final %FFA values is due to how the gel structure was affected by pH during its elaboration, since at the higher pH the internal structure of the gel is held together by disulfide bonds, which makes it less susceptible to subsequent mechanical and enzymatic action [54,55]. It is known that, in the presence of pepsin, the disintegration rate of a soft gel is higher than that of a hard gel [63]. Consistently, gels made at pH 7.0 presented values of hardness, cohesiveness and chewiness significantly higher than those of pH 4.0 gels. Due to these characteristics, during gastric digestion in the IMGS the disintegration of the structure by the peristaltic action becomes more difficult. As these gels have a more cohesive internal structure, the enzymatic attack by pepsin is hindered, so the pieces of gel that finally pass through the pylorus to the intestinal phase are larger. This has an impact on intestinal digestion: although the enzymatic attack of lipase and pancreatin is almost immediate, the rate of fatty acid release is low, resulting in a lower final released %FFA than for gels at pH 4.0. Finally, the type of gastric motility and the incorporation of the gastric emptying process during digestion of emulsion gels severely impact both the kinetics of FFA release and its final extent, reflected in a reduction of ~47.9% in the fatty acids released by the IMGS-SBi compared with those obtained by the SBg-SBi system.

Conclusions

In this work, we studied the impact of the in vitro gastric digestion methodology and of the gel structure on the in vitro intestinal proteolysis and lipolysis of emulsion gels. For this, two systems of in vitro gastric digestion were used: a system with realistic gastric peristalsis and emptying (IMGS) and a conventional system based on a stirred beaker operated at constant speed (SBg). After gastric digestion, assays of in vitro intestinal digestion were carried out in a stirred beaker (SBi). Emulsion gels stabilized by WPI were fabricated at different pH (4.0 and 7.0) and homogenization pressures (500 and 1000 bar). The textural characteristics of the gels were determined by the pH of their fabrication, with values of hardness and cohesiveness of ~10 N and 0.45, and ~12 N and 0.87, for gels at pH 4.0 and 7.0, respectively. These textural characteristics significantly affected how the emulsion gels were subsequently digested. Both the intestinal proteolysis and the lipolysis of the emulsion gels differed greatly between in vitro digestion systems with different operating characteristics. The kinetics of intestinal proteolysis of the gels were markedly different between methodologies. When using the SBg-SBi system, the kinetics were similar for all samples and did not present a lag phase; however, the kinetics obtained by the IMGS-SBi methodology differed in shape according to the pH of the sample and showed a lag phase whose extent was dependent on the sample pH. In addition, significantly higher percentages of proteolysis (p < 0.05) were obtained using IMGS-SBi, with values ranging from ~7.0% to 21.5%, compared with SBg-SBi (~2.4-4.8%). On the other hand, the %FFA released during the intestinal lipolysis of the gels was higher for gels digested in stirred beakers than in the IMGS-SBi system. The final extent of FFA released from gels at 500 bar using SBg-SBi was 43.9% and 42.6% for samples at pH 4.0 and 7.0, respectively, whereas these values were 28.2% and 19.4% when using IMGS-SBi.
Similar values were obtained for emulsion gels at 1000 bar. These findings reflect the impact that gastric motility and emptying have on food digestion, particularly for solid and semisolid matrices. It is important to understand that the results of in vitro digestion depend not only on an initial characterization of the food matrix to be digested (whether in the oral, gastric or intestinal phase), but also on how that matrix disintegrates at each stage of digestion, since the way in which this occurs in one phase will influence the digestion behavior in the next. In consequence, the in vitro evaluation of nutrient release should be conducted considering not only the system to be used, but also the structural changes throughout digestion, thereby contributing to the rational design of foods with improved nutritional properties. Data Availability Statement: The data presented in this study are available on request from the corresponding author (Elizabeth Troncoso).
2021-02-11T06:16:34.680Z
2021-02-01T00:00:00.000
{ "year": 2021, "sha1": "c297877fcbbac72bce5f84a028574d07b354be00", "oa_license": "CCBY", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7913480", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "8acdbda5ce14e7e659d18d142e63d2602a933c8a", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
266023598
pes2o/s2orc
v3-fos-license
Congenital intrarenal arteriovenous malformation presenting with gross hematuria after endoscopic intervention: a case report

Introduction: Although diagnostic ureterorenoscopy is a minimally invasive and effective diagnostic procedure, it has the potential for significant postoperative complications. We report the first case in the literature of intrarenal arteriovenous fistulas causing hemodynamically effective anemia 4 days after ureterorenoscopic biopsy.

Case presentation: A 63-year-old Caucasian woman presented with hemodynamically effective macrohematuria (hemoglobin 70 g/liter) 4 days after ureterorenoscopy and biopsy of the upper pole collecting system due to recurrent microhematuria. Duplex-sonography and computed tomography angiography revealed multiple arteriovenous fistulas and erosions into the calyceal system. Intra-arterial digital subtraction angiography confirmed this condition. After superselective embolization of the arteriovenous fistulas, the patient had no further episodes of bleeding or microhematuria.

Conclusion: If malignancies, urolithiasis or urinary tract infections are ruled out by common diagnostic procedures as the cause of recurrent minor or gross hematuria, the possibility of arteriovenous fistulas should be included in the differential diagnosis, and Duplex-sonography or the more invasive selective renal arteriography should be performed, as the latter is the most definitive method for diagnosing arteriovenous fistulas.

Introduction

Although arteriovenous fistulas are rare conditions, they have a considerable clinical impact. In fact, they may cause hypertension, local thrombosis, peripheral embolization, high-output cardiac failure and hematuria. Although ureterorenoscopy is a minimally invasive and effective diagnostic and therapeutic procedure, it has the potential for significant postoperative complications. We report a case of intrarenal arteriovenous fistulas causing hemodynamically effective anemia 4 days after ureterorenoscopic biopsy.

Case presentation

A 63-year-old woman presented with recurrent microhematuria. She had no history of flank pain, macrohematuria, hypertension, renal trauma or percutaneous instrumentation. Physical examination was normal; specifically, there was no abdominal bruit on auscultation. Her blood pressure was 130/80 mmHg. Routine laboratory tests were within normal limits. Urinalysis showed no evidence of infection but was positive for erythrocytes. An initial renal ultrasound revealed a discrete hypoechogenicity of the left upper renal pole. An intravenous pyelogram was performed, demonstrating an irregular configuration of the upper pole collecting system, which was also seen on retrograde ureteropyelography (Figure 1). Cystoscopy as well as ureterorenoscopy (URS) revealed no suspicious formation within the bladder, along the left ureter or in the renal pelvis. Tissue around blood clots in the upper calyceal group was biopsied. Cytology and histology did not identify malignant cells. The patient was discharged with a ureteral stent. Four days after the intervention, emergency admission was necessary due to hemodynamically effective macrohematuria (hemoglobin 70 g/liter) causing a bladder coagulum, which made transurethral evacuation necessary. Duplex-sonography and computed tomography angiography (CTA) were then carried out and revealed multiple arteriovenous fistulas (AVFs) and erosions into the calyceal system. Intra-arterial digital subtraction angiography (i.a.
DSA, Figure 2) in the early arterial phase showed arteriovenous fistulas between a subsegmental branch of the renal artery and the renal vein; these were superselectively embolized with eight platinum coils with cotton filaments. Angiographically, no significant differences in parenchymal perfusion were noted before and after the intervention. Pathologic neoplastic vessels were ruled out radiomorphologically. Five months after the intervention, a control computed tomogram showed no recurrent AVF or malignancy. The patient had no further episodes of bleeding or microhematuria.

Discussion

Arteriovenous fistulas, first described by Varela in 1928, are rare conditions which nevertheless have a considerable clinical impact [1]. In fact, they may cause hypertension, local thrombosis, peripheral embolization, high-output cardiac failure and hematuria [2]. There are two types of AVF, classified as congenital and acquired [3]. In total, 70 to 80% of all AVFs are of the acquired type and may be secondary to trauma, renal surgery, inflammation, neoplasia or percutaneous needle biopsy, the latter contributing to the recent increase in incidence. Acquired renal AVFs may be located throughout the whole kidney. Angiographically, they appear as solitary communications between arteries and veins. More than 70% of these fistulas close spontaneously within a few weeks or months without active intervention. Therefore, the common strategy in asymptomatic patients with incidental detection of AVFs is to 'wait and watch' [4]. In 20 to 30% of all cases, an arteriovenous fistula is a congenital condition, usually located in the upper pole (45% of cases) but also appearing in equal proportion in the midportion or the lower pole of the kidney, topographically beneath the calyceal or pelvic mucosa. Congenital AVFs are characterized angiographically, as in our patient, by their cirsoid configuration with multiple communications between arteries (main or segmental renal arteries) and veins [2,4]. Based on the angiographic criteria, a second form of congenital AVF exists which is classified as the aneurysmal type and has been mentioned in the literature as a spontaneous or idiopathic fistula [4]. While the latter predominantly presents with cardiovascular symptoms, the cirsoid forms show a high incidence of gross hematuria [2]. In the pericalyceal renal parenchyma, the small interlobular arteries and their corresponding veins, as well as existing AVFs, are in close proximity to the collecting system. This explains recurrent hematuria in more than 75% of individuals and possible filling defects or reduced function of the affected kidney on excretory urography, although these are absent in 50% of cases [2].

Figure 1. Retrograde ureteropyelography demonstrating an irregular configuration of the upper pole collecting system.

In our patient, we postulate that, due to the biopsy during the endoscopic intervention, a perforation had occurred and venous dilatations of the AVFs eroded into the collecting system, causing gross hematuria. Active management was necessary due to hemodynamically effective gross hematuria. Selective renal arteriography, as the most definitive method for diagnosing the lesion, was performed with simultaneous superselective coil embolization. This treatment method is well accepted in such conditions since it avoids surgery. Parenchymal infarction secondary to embolization can be limited to the region supplied by the artery containing the lesion.
This is especially important in patients with only one functioning kidney or with renal insufficiency. The technique is also indicated in patients who are considered poor surgical candidates, since the procedure is performed under local anesthesia with low morbidity and a low risk of complications [5][6][7]. In contrast to patients presenting with hematuria, we suggest nephrectomy or partial nephrectomy as the treatment of choice in individuals with symptoms of alterations in the cardiovascular system, such as renin-mediated hypertension due to fistula-related relative ischemia or high-output cardiac failure caused by increased venous return.

Conclusion

Congenital AVFs are rare conditions which may cause cardiovascular complications (in 50% of cases) and recurrent hematuria in more than 75% of individuals. If malignancies, urolithiasis or urinary tract infections are ruled out by common diagnostic procedures as the cause of recurrent minor or gross hematuria, the possibility of AVFs should be included in the differential diagnosis, and Duplex-sonography, or the more invasive selective renal arteriography as the most definitive method for diagnosing AVF, should be performed. Depending on the general condition of the patient and their symptoms, the treatments of choice include nephrectomy and partial nephrectomy, but most urologists aim for superselective embolization.
2018-04-03T06:10:25.195Z
2008-10-12T00:00:00.000
{ "year": 2008, "sha1": "50bc3f596673d49fcb1ffddb39c4d1ad7ce48703", "oa_license": "CCBY", "oa_url": "https://jmedicalcasereports.biomedcentral.com/track/pdf/10.1186/1752-1947-2-326", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e5d4c30fd00131557856733805ae74e2f134f2f4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
90565522
pes2o/s2orc
v3-fos-license
The welfare of layer hens in cage and cage-free housing systems

Historically, animal welfare has been defined by the absence of negative states such as disease, hunger and thirst. However, a shift in animal welfare science has led to the understanding that good animal welfare cannot be achieved without the experience of positive states. Unequivocally, the housing environment has significant impacts on animal welfare. This review summarises how cage and cage-free housing systems impact some of the key welfare issues for layer hens: musculoskeletal health, disease, severe feather pecking, and behavioural expression. Welfare in cage-free systems is currently highly variable, and needs to be addressed by management practices, genetic selection, further research, and appropriate design and maintenance of the housing environment. Conventional cages lack adequate space for movement, and do not include features to allow behavioural expression. Hens therefore experience extreme behavioural restriction, musculoskeletal weakness and an inability to experience positive affective states. Furnished cages retain the benefits of conventional cages in terms of production efficiency and hygiene, and offer some benefits of cage-free systems in terms of an increased behavioural repertoire, but do not allow full behavioural expression. In Australia, while the retail market share of free-range eggs has been increasing in recent years, the majority of hens (approximately 70%) remain housed in conventional cages, and furnished cages are not in use. Unlike many other countries including New Zealand, Canada, and all those within the European Union (where a legislated phase-out commenced in 1999 and was completed in 2012), a legislative phase-out of conventional cages has not been announced in Australia. This review came about in light of the current development of the Australian Animal Welfare Standards and Guidelines for Poultry in Australia. These standards are intended to provide nationally consistent legislation for the welfare of all poultry species in all Australian states and territories. While it is purported that the standards will reflect contemporary scientific knowledge, there is no scientific review, nor scientific committee, to inform the development of these standards.

Introduction

There are three main science-based frameworks which have been used to understand animal welfare (Fraser, 2003; Hemsworth et al., 2015). These include: 1) biological functioning: an animal's ability to cope with its environment and whether its needs are met; 2) affective state: an animal's subjective experiences; and 3) natural living: the ability for an animal to live according to its nature and perform normal behaviours (Broom, 1986; EFSA, 2005; Fraser, 2003; Hemsworth et al., 2015). Historically, animal welfare has been defined by the absence of negative experiences such as disease, hunger, thirst, stress or reduced fitness (Bracke and Hopster, 2006). Indeed, the majority of animal welfare research in the last 40 years has focused on the avoidance of negative states. However, there is increasing interest and research in the experience of positive welfare states in animals. This shift in welfare science has led to the understanding that good animal welfare cannot be achieved without the experience of positive affective states such as feeling comfort, pleasure, and a sense of control (Mellor and Beausoleil, 2015). The Five Freedoms were developed by the United Kingdom Farm Animal Welfare Council and released in 1979.
The principles form a basic qualitative framework on which welfare schemes and welfare assessment tools have been based. The Five Freedoms have been highly influential in the development and scope of animal welfare standards internationally. While they do not prescribe specific conditions, they were the first to include subjective experiences, health status and behaviour in one framework (Mellor, 2016). Over time, they have resulted in a shift in animal welfare assessment away from a focus on biological functioning towards a focus on the animal's experiences. The Five Domains of Potential Welfare Compromise (commonly referred to as the Five Domains) model was originally developed in 1994 as a framework with which to assess the welfare of animals used in research (Mellor and Reid, 1994). It was subsequently adopted in 1997 as part of regulatory requirements for assessing the welfare of animals used for scientific purposes in New Zealand. The model integrates biological functioning and affective states by considering internally regulated, as well as externally generated, inputs (Mellor and Beausoleil, 2015). The physical considerations of the model comprise nutrition, the environment, health and behaviour, while the fifth domain considers mental state, or affective experiences. A compromise in any of the physical domains also influences the emotional experience directly. For example, food deprivation leads to the affective experience of hunger, which in turn may lead to further negative mental states, such as frustration or stress (Mellor and Beausoleil, 2015). Thus, good welfare involves a combination of adequate nutrition, an appropriate environment, optimal health, the expression of normal behaviours, and positive mental experiences. The Five Domains model has recently been adapted to allow the assessment of positive as well as negative experiences, to encourage more opportunities for animals to experience positive states whilst minimising negative states (Mellor, 2013; Mellor and Beausoleil, 2015). Assessment of animal welfare and the management of animals in the future will require an emphasis on the experience of positive affective states. Innate, or 'normal', behaviours are those which are inherent to animals and which animals are typically motivated to carry out. The performance of these behaviours is thought to be a component of biological functioning, is pleasurable, and is necessary to avoid stress (Bracke and Hopster, 2006). In layer hens, innate behaviours include dustbathing, perching, foraging, and nesting. These behaviours are often driven by internal factors, and are internally and physiologically regulated (Hughes and Duncan, 1988). The need to express normal behaviours, the level of satisfaction these behaviours provide, and the amount of frustration caused by their inhibition can be scientifically assessed. This may be done by measuring the intensity, duration, and incidence of particular behaviours (Bracke and Hopster, 2006). Frustration is an aversive state which arises when animals are prevented from performing a behaviour that they are strongly motivated to perform (Fraser et al., 2013). Housing can create welfare problems when it causes frustration. The opportunity for hens to perform behaviours which they are motivated to perform is central to achieving positive welfare states (Mellor and Webster, 2014). In Australia, the market share of free-range eggs sold at retail has been increasing in recent years.
However, the majority of layer hens (approximately 70%) are housed in conventional cages. Furnished cages are currently not in use. Unlike several other countries including New Zealand, Canada, and all those within the European Union, where a legislated phase-out commenced in 1999 and was completed in 2012, a legislative phase-out of conventional cages has not been announced in Australia. While conventional cages are more hygienic, contribute to a lower incidence of infectious diseases, allow easier management, and are cheaper to operate, they do not provide adequate space per hen; hens experience extreme behavioural restriction, the lack of movement causes metabolic disorders and high rates of disuse osteoporosis, and the birds experience severe frustration due to the prevention of normal behaviours such as nesting (Duncan, 2001). This review is in light of the current development of the Australian Animal Welfare Standards and Guidelines for Poultry in Australia. The Standards and Guidelines are intended to streamline poultry welfare legislation across Australia and improve welfare outcomes for all poultry species. This process is funded by the poultry industries as well as state and federal Governments. It is purported that the development came about in recognition of 'significant advances in husbandry practices, technology, and in available science, since the current code was endorsed in 2002', and that the standards will 'aim to reflect contemporary scientific knowledge, provide competent animal husbandry advice, meet mainstream community expectations, and that can be maintained and enforced in a consistent, cost-effective manner' (Animal Health Australia, 2016). Indeed, while there are a number of factors which influence societal use of animals and determine acceptable animal use, including human health, economic, social, and environmental factors, science provides the means to understand the impact of animal use on the animal. Therefore, science has a very prominent role in underpinning decisions on animal use and the conditions under which animals are kept. However, unlike in many other countries, there is no scientific review conducted to inform the development of farm animal welfare standards in Australia, and no scientific advisory committee, making it impossible for the standards to reflect contemporary scientific knowledge. The Australian Government Productivity Commission conducts independent research and acts as an advisory body. In late 2016, the Productivity Commission produced a report on the Regulation of Australian Agriculture which includes animal welfare policy governance in Australia. The report highlights concerns with the lack of national leadership and flaws in national standard-setting. Key points in the report include that the current approach to developing national standards and guidelines for farm animal welfare needs to be improved by relying more on rigorous science and evidence of community values. They proposed that an independent statutory agency be established to develop the standards (Australian Government Productivity Commission, 2016). Evaluation of contemporary scientific knowledge is necessary to understand what is known about how different housing environments may be affecting hen welfare, inform decision-making, and identify important areas for future research.
There is a need for comprehensive, independent scientific literature reviews to cover all welfare aspects of the housing and husbandry of all poultry species. This review aims to summarise the scientific literature on the welfare of layer hens housed in the various housing systems (conventional cages, furnished cages, and cage-free) in relation to four key areas: musculoskeletal health, disease, severe feather pecking, and behavioural expression. There are many more factors which can affect hen welfare, including nutrition, the environment (air quality, lighting, environmental enrichment, access to resources), genetics, group size, predation, social environment, stocking density and space allowance, management practices, and husbandry and the human-animal relationship, which are not covered in this review.

Skeletal health

Commercial layer hens have been genetically selected for production characteristics, including rate of lay. Their ancestors, the red jungle fowl, lay approximately 10 to 15 eggs per year (Romanov and Weigend, 2001). In comparison, the modern-day layer hen can lay over 350 eggs per year. Layer hens also have a higher growth rate, heavier adult body weight, earlier sexual maturity, and larger egg sizes than the red jungle fowl. The formation of egg shells requires the deposition of calcium. The high rate at which eggs are laid therefore leads to a loss of bone calcium and consequently high rates of osteoporosis, skeletal fragility, and susceptibility to fractures. Bone fragility and muscle weakness are exacerbated when birds are unable to move and exercise sufficiently (Webster, 2004; Widowski et al., 2013). The extreme behavioural restriction hens experience in conventional cages therefore further contributes to bone weakness, and hens often suffer from disuse osteoporosis (LayWel, 2006). Hens in conventional cages suffer the poorest bone strength, and the highest rate of fractures at depopulation, of all housing systems (Widowski et al., 2013). By contrast, hens in cage-free systems experience the best musculoskeletal health. A study by Rodenburg et al. (2008) comparing cage-free systems and furnished cages found that hens in cage-free systems had stronger wing and keel bones than hens in furnished cages. An increased behavioural repertoire and the ability to exercise, including walking, running, perching, wing-flapping, and flying, increases musculoskeletal strength and decreases the incidence of osteoporosis and of fractures which occur during depopulation. However, bone fractures are a risk when hens fall or sustain injuries during flight on objects such as perches, feeders, drinkers, or nest boxes within the shed (Lay et al., 2011; Fraser et al., 2013). Therefore, hens in cage-free systems can experience more fractures during the laying period than those in cage systems (Lay et al., 2011; Widowski et al., 2013). Furnished cages typically allow hens to perch, which contributes to improved bone strength (Lay et al., 2011). Hester (2014) suggested that the addition of perches to cages may be a compromise that allows the benefits of cages to be retained (improved liveability and lower respiratory disease), while better meeting the behavioural needs of layer hens. Hens in furnished cages exhibit the lowest number of total fractures compared with both cage-free and conventional cage systems.
This is probably due to improved musculoskeletal health resulting from the use of perches, compared with conventional cages, and to the absence of the environmental complexity which can be present in cage-free systems (Widowski et al., 2013). Rodenburg et al. (2008) compared keel bone fractures in furnished cages, floor housing, and aviary systems and found that there were significantly fewer hens with keel bone fractures in furnished cages compared with the cage-free systems (62%, 82% and 97% of birds with fractures, respectively). Hester (2014) stated that although perches contribute to the incidence of keel fractures, hens should have access to perches due to their high motivation to perch, improved bone strength, improved feather quality, and improved foot pad, toe, and nail health. Further, there is no deleterious effect of perches on production besides dirty and cracked eggs (Hester, 2014). Bone strength has been found to be heritable. Genetic selection is extremely effective in improving bone strength and resistance to osteoporosis (Fleming et al., 2006), with bone strength improving over just one or two generations (LayWel, 2006). A study by Fleming et al. (2005) found that hens selected for improved bone strength also had significantly higher egg production. The number of fractures sustained by hens in cage-free systems should be addressed through a combination of selective breeding, optimised diets, the provision of perches, and improvements in the design, placement, and maintenance of structures in the shed, including perches (LayWel, 2006; Widowski et al., 2013). Further research is required into optimal perch design and placement, the effect on keel bone damage, and how chicks and pullets learn to use perches (Hester, 2014).

Disease

Generally, there is a reportedly higher incidence of bacterial infections, viral diseases, coccidiosis, and red mites in litter-based and free-range systems than in cage systems (Rodenburg et al., 2008; Fossum et al., 2009; Widowski et al., 2013). Contact with soil, litter, faeces, and other vectors including rodents and insects increases the risk of infectious diseases. Birds with access to the outdoors may have a higher risk of contracting diseases such as avian influenza, Newcastle disease, and ectoparasites from wild birds (Lay et al., 2011; Widowski et al., 2013), while red mites often reside in the environment (Chauve, 1998; Lay et al., 2011; Fraser et al., 2013). The risk of infectious diseases can be significantly lowered by proactive approaches such as biosecurity and vaccination programmes (Martin, 2011; Fraser et al., 2013). Health and hygiene practices have led to a decline in the proportion of birds with viral, parasitic, and non-infectious diseases in cage-free systems in Switzerland (Kaufman-Bart and Hoop, 2009). Four approaches to infectious disease control have been suggested by Fraser et al. (2013). These include: 1) protecting individual animals through hygiene, vaccination, and anti-parasite treatments; 2) preventing disease spread within a farm; 3) preventing the entry of diseases onto the farm; and 4) eliminating diseases over large areas. Hens in conventional cages may experience metabolic disorders due to lack of exercise. Caged hens can show paralysis around peak production, termed 'cage layer fatigue', which is due to bone weakness, fractures of the thoracic vertebrae, and compression of the spinal cord (Duncan, 2001).
Other non-infectious diseases, including fatty liver and disuse osteoporosis, are more prevalent in conventional cages compared with systems that allow greater opportunities for behavioural expression and movement (Weitzenbürger et al., 2005; Kaufman-Bart, 2009; Lay et al., 2011; Widowski et al., 2013). Fatty liver is a metabolic disease typically seen in conventional cages (EFSA, 2005; Jiang et al., 2014) which can cause rupture of the liver and sudden death. The main factors which are thought to contribute to the development of fatty liver include lack of exercise and restricted locomotion, high environmental temperatures, and a high level of stress (EFSA, 2005). Non-infectious diseases such as disuse osteoporosis and fatty liver, which are associated with lack of movement and exercise, are difficult to treat or manage in conventional cages due to the inherent behavioural restriction in these systems.

Severe feather pecking and cannibalism

Severe feather pecking, whereby hens vigorously peck at and pull out the feathers of other birds, is a significant welfare concern in the layer industry. It has been documented in all types of housing systems, including cage, litter-based, free-range, and aviary systems (Appleby and Hughes, 1991; Huber-Eicher and Sebö, 2001; Bestman et al., 2009). Research suggests that there is a small proportion of birds which initiate severe feather pecking, and that the behaviour may then spread throughout a flock (Bessei and Kjaer, 2015). Therefore, housing birds in large groups, as commonly occurs in cage-free systems, may contribute to an increased prevalence of severe feather pecking (Hughes, 1995; McAdie and Keeling, 2000; Potzsch et al., 2001). Mitigating the risk and spread of severe feather pecking is critical for hen welfare. Management practices which may minimise the risk of severe feather pecking include the provision of adequate nutrition, appropriate feed form, high-fibre diets, suitable litter from an early age onwards, no sudden changes in diet or environmental conditions, minimising stress and fear in the birds, the provision of environmental enrichment, appropriate rearing conditions, good husbandry, and matching the rearing and laying environments (Kjaer and Bessei, 2013; Rodenburg et al., 2013; Hartcher et al., 2016). Severe feather pecking is heritable (Savory, 1995; Kjaer and Bessei, 2013; Bessei and Kjaer, 2015), and current studies are investigating traits which may predispose particular birds to initiate the behaviour, to enable genetic selection against these traits. Management practices should therefore be paired with genetic selection programmes. This approach, as well as further research into this area, has the potential to reduce the prevalence of severe feather pecking (LayWel, 2006; Rodenburg et al., 2013; Bessei and Kjaer, 2015; Hartcher et al., 2016).

Movement

Animals require an absolute amount of physical space to extend their limbs and perform basic movements, including changing posture and turning around. The amount of space required for a hen to turn around and stretch its wings is greater than the space which is provided in most conventional cages (Widowski et al., 2016). Hens have been found to perform 'rebound' levels of wing-flapping, tail wagging, and stretching when they are moved to a larger space after weeks of confinement in a small area, with some behaviours correlated with the duration of confinement. This indicates that hens do not adjust to prolonged spatial restriction (Nicol, 1987; Lay et al., 2011).
Furnished cages generally allow more movement than conventional cages. They typically provide more space, perches, enclosed nests, substrate, and an area to scratch. Behaviour is therefore more unrestricted and varied than in conventional cages, hens are able to perform some of their most highly motivated behaviours, and physical condition is better (Appleby et al., 2002). However, the extent to which behaviours are able to be expressed in furnished cages has been questioned (Cronin et al., 2012). Litter is often delivered in insufficient quantities, and locomotion, ground-scratching, wing-flapping and flying are inherently limited or prevented in caged environments (Appleby et al., 2002; Lay et al., 2011). Cage-free systems generally allow greater opportunities for locomotion and basic movements than cages. Locomotion is increased because resources are spread out horizontally and sometimes vertically. However, stocking densities may be inhibitive, and must be low enough to facilitate movement and behavioural expression (Leone and Estevez, 2008; Lay et al., 2011). Studies have demonstrated that hens are highly motivated to perform a number of innate behaviours, including perching and finding a nesting site, and when housing constraints prevent hens from performing these behaviours, they can experience frustration and emotional distress, which may be exhibited by stereotypic back-and-forward pacing behaviour (Fraser et al., 2013). There may also be physical consequences, including compromised biological function or harmful variants of the restricted behaviour such as feather pecking or hysteria (Lay et al., 2011). Correspondingly, studies have reported lower levels of frustration in systems where hens are able to express behaviours such as nesting and dustbathing (Zimmerman et al., 2000; Widowski et al., 2013).

Perching

The provision of perches allows hens to perform their normal perching behaviour, therefore satisfying a behavioural demand (Lay et al., 2011). Hens have demonstrated a strong motivation to access perches in behavioural tests, for example by pushing through weighted doors (Olsson and Keeling, 2002), and almost all hens use perches at night if adequate perch space is provided (Blokhuis, 1983; 1984; Appleby et al., 2002; Lay et al., 2011; Fraser et al., 2013). Hens show signs of unrest when they are deprived of the opportunity to perch at night, and experience frustration and reduced welfare if perching is not possible (Olsson and Keeling, 2002; Fraser et al., 2013). The use of perches can reduce fearfulness and aggression (Donaldson and O'Connell, 2012), reduce bird density on the floor (Cordiner and Savory, 2001), lower the risks of piling and smothering (Lay et al., 2011), improve motor activity, and provide resting locations and places of refuge from aggressors (Cordiner and Savory, 2001; Lay et al., 2011; Yan et al., 2014). The provision of perches within the first four weeks of life has also been shown to reduce the risk of cloacal cannibalism in adulthood (Gunnarsson et al., 1999). The use of perches has been shown to improve bone strength (Lay et al., 2011) and musculoskeletal health through exercise. Enneking et al. (2012) provided pullets with perches from one day to 17 weeks of age. Birds with perch access had greater bone mineral content of the tibia, sternum and humerus, as well as greater muscle deposition, at 12 and 71 weeks of age compared with birds without access to perches (Enneking et al., 2012; Yan et al., 2014).
In cages, perches also give reprieve from standing on a sloped wire floor (Hester, 2014). While there are welfare benefits to providing perches, their use can also present risks to welfare. Perches and other structural features within the housing environment, such as tiers in aviary systems, can cause keel bone deformities and foot pad lesions, and there is a risk of fractures if birds do not land successfully when jumping or flying between perches or tiers in cage-free systems (Lay et al., 2011; Heerkens et al., 2016). Perches installed in cages can also cause keel bone fractures and deformities, although at a lower rate than in cage-free systems (Hester, 2014). Poorly designed and maintained perches have been associated with bumblefoot due to an accumulation of droppings and litter (Lay et al., 2011), and perches in furnished cages have been associated with an increased risk of cloacal cannibalism. Pickel et al. (2011) investigated perch shape and type and their effects on hens' keel bones and foot pads. Certain designs, such as those with a soft surface, a larger surface area, and a hygienic surface, may be important in minimising risks to keel bones, foot health, and subsequently hen welfare (Pickel et al., 2011). Management practices can have an effect on perch use. In particular, the rearing environment and whether pullets are provided with perches during rearing, the stocking density during the laying period, and the lighting programme all affect how hens utilise perches. Rearing without early access to perches appears to cause low muscle strength, a lack of motor skills, an inability to keep balance, and impaired cognitive spatial skills, with long-lasting effects on welfare (EFSA, 2015). Therefore, providing perches during the rearing period enhances the ability to utilise them in the laying period, and also reduces the incidence of floor eggs (Gunnarsson et al., 1999; Lay et al., 2011). A recent review by Janczak and Riber (2015) recommended that the rearing system should provide constant access to perches. The welfare issues associated with perches may be partly addressed by good management, and by perch placement, type and design. For example, the risk of unsuccessful landings, and therefore of bone deformities and fractures, may be reduced by perch type and placement (Scott et al., 1997; Lay et al., 2011). Heerkens et al. (2016) found that the provision of ramps was effective in reducing keel bone and foot pad problems, and suggested that the adaptation of housing systems combined with genetic selection programmes may offer effective methods to improve hen welfare (Heerkens et al., 2016).

Nesting

Nesting behaviour is a priority for hens (Weeks and Nicol, 2006; Lay et al., 2011), and is important for their welfare (Cooper and Albentosa, 2003; Weeks and Nicol, 2006; Cronin et al., 2012; Widowski et al., 2013). The need for layer hens to perform pre-laying behaviour and utilise a nest has been assessed by motivation tests, which have consistently demonstrated that it is a high priority (Widowski et al., 2013). The majority of layer hens prefer to lay their eggs in a discrete nest site (Appleby et al., 2002; Weeks and Nicol, 2006; Cronin et al., 2012), and the strength of the motivation to access a nest box has been demonstrated in a number of different ways.
Cooper and Appleby (2003) concluded that hens' work-rate to access a nest 20 minutes prior to egg-laying, as measured by the extent to which they were willing to work by pushing a weighted push-door to access resources, was twice the work-rate to access food after four hours of confinement without feed. Similarly, Zimmerman et al. (2000) found that hens exhibited greater frustration when a nest was denied than when feed and water were withheld. Hens which were denied an appropriate nest site at oviposition expressed frustration through specific 'gakel-call' vocalisations. In conventional cages, where there are no secluded nest sites, hens have expressed frustration in the form of repetitive, stereotyped pacing (Yue and Duncan, 2003; Lay et al., 2011), and the retention of eggs beyond the expected time of lay (Yue and Duncan, 2003; Widowski et al., 2013). Hens prefer to lay eggs in a nest rather than on a sloping wire floor, and the lack of a nest may reduce welfare (Hughes et al., 1989; Lay et al., 2011). In addition to satisfying a behavioural demand, a closed nest area can reduce cloacal cannibalism (Newberry, 2004; Lay et al., 2011). While there have been a number of studies which assessed the behavioural motivation for hens to access nest boxes, taking physiological measurements is not as straightforward, and there is a lack of information on the physiological stress responses of hens when nest boxes are denied. Complications associated with taking physiological measurements of stress include the highly variable peak in plasma corticosterone prior to egg-laying, which may confound measurements (Cronin et al., 2012). Some studies have investigated the correlation between the concentration of corticosterone in plasma and in egg albumen, although there have been conflicting findings, and there is a need to investigate this relationship and how corticosterone may be used as part of welfare assessments. Downing and Bryden (2008) found a positive correlation, which suggested that corticosterone in egg albumen may provide a non-invasive measure of stress. However, Engel et al. (2011) found few correlations between corticosterone concentrations in plasma, albumen, yolk, and faeces. Corticosterone may be deposited into egg albumen over an 8-hour period each day, while hens typically display pre-laying behaviours 1-2 hours prior to egg-laying. Corticosterone in albumen may therefore not be a useful indicator of stress associated with nesting. Research on the stress physiology of hens in relation to egg-laying behaviour is very limited, and the correlation between corticosterone concentrations in plasma and egg albumen needs further validation (Cronin et al., 2012).

Dustbathing

Functionally, dustbathing is performed to clean the feathers (Lay et al., 2011). It acts to remove skin parasites, regulate the amount of feather lipids, and maintain plumage condition (Olsson and Keeling, 2005). Hens have been found to work to obtain a dustbathing substrate and, after deprivation of dustbathing, are more motivated to dustbathe, indicating that it is a high priority (Widowski and Duncan, 2000). However, the evidence on the motivation to access substrate for dustbathing is not as conclusive as the motivation to access other resources, and some studies have not found any evidence of a motivation to dustbathe.
It has been suggested that the methodology used, for example whether or not hens can see the litter in the experiments, may explain the differences between studies (Olsson and Keeling, 2005). Conventional cages have no provisions for dustbathing. Sham dustbathing can occur, where hens perform dustbathing movements on the wire floor which would normally include scooping dust into the plumage. However, the dustbathing sequence cannot be completed, as there is no substrate and no shaking off of lipid-saturated dust. Sham dustbathing lacks positive feedback (Widowski and Duncan, 2000), does not satisfy the birds' motivation for dustbathing (Olsson and Keeling, 2005), and indicates a reduced state of welfare (Lay et al., 2011). Further, when birds are unable to dustbathe, plumage is in a poorer condition, as it is dirtier, less waterproof and less insulative (Scholz et al., 2014). Furnished cages have some provisions for dustbathing (Appleby et al., 2002; Lay et al., 2011). However, the extent to which dustbathing is accommodated in furnished cages is variable, and significant variation has been found in the use of dustbaths between different types of furnished cages (Tauson, 2005). Typically, hens in furnished cages only receive very small amounts of feed as litter material, delivered once per day onto an Astroturf mat in the main area of the cage to allow foraging and dustbathing. The hens' propensity to forage keeps the mat relatively clean and minimises its use for egg-laying (Lay et al., 2011). However, since litter is provided in small quantities, it is often quickly depleted. Restricted access to litter, and the small amounts provided, can cause stress (Lay et al., 2011), and subordinate hens may be excluded from the litter area by more dominant hens (Shimmura et al., 2008). Cage-free systems have the ability to provide adequate materials to facilitate dustbathing. However, not all systems provide dustbathing material; it is possible for cage-free systems to have plastic or wire flooring. If dustbathing material is provided, regular monitoring and maintenance are often required to keep the litter from becoming wet, avoiding associated conditions including high concentrations of atmospheric ammonia and contact dermatitis (Widowski et al., 2013), or too dusty, avoiding the negative health implications associated with high levels of dust in the air (Rodenburg et al., 2005).

Foraging and exploration

Foraging is a key part of the normal behavioural repertoire of chickens (LayWel, 2006). Litter is an important element of the birds' environment, and caged hens have a high demand for a litter substrate (Gunnarsson et al., 2000). It is preferred over wire mesh by hens, and is necessary for the normal expression of some behaviour patterns (Dawkins, 1981). When litter is available, it is used extensively by hens for scratching and pecking (Ekesbo, 2011; Hartcher et al., 2015). Further, hens perform foraging behaviours even when feed is provided ad libitum (Lay et al., 2011; Widowski et al., 2013), a phenomenon termed 'contrafreeloading', demonstrating an innate behavioural motivation to forage for food. Foraging behaviour is not possible in conventional cages, and is only partially accommodated in furnished cages, where substrate may be insufficient or quickly depleted. Environmental complexity is extremely limited in both conventional and furnished cage systems, which limits the hens' ability to explore their environment and forage (LayWel, 2006).
Allowing hens to access an outdoor area improves opportunities for behavioural expression including foraging, exercising, and exploring. If the range area is well-maintained, easily accessible from the shed, offers shade and shelter, and is attractive to birds, this will enhance its use. When birds utilise outdoor areas, this lowers the stocking density inside the shed, and can result in increased locomotion and exercise, and improve inter-individual distances and normal social behaviours (Knierim, 2006). Hens are motivated to forage. Access to good quality, well-maintained litter is critical to their welfare to maintain good plumage condition, improve the feeling of satisfaction, and potentially reduce adverse behaviours such as severe feather pecking (Rodenburg et al., 2013).

Conclusions

Hens in cage systems have the lowest risk of contracting and transmitting infectious diseases, and the lowest risk of severe feather pecking. They also suffer fewer fractures during the laying period, which is likely due to the lack of environmental complexity in these systems. However, hens in conventional cages experience extreme behavioural restriction, suffer the poorest musculoskeletal strength of all housing systems, and the highest number of fractures at depopulation. They also experience the highest rate of some non-infectious diseases, including fatty liver and disuse osteoporosis, compared with housing systems which allow greater opportunities for behavioural expression and exercise. Furnished cages retain the benefits of conventional cages in terms of hygiene and efficiency of production, and offer some benefits of cage-free systems. Behavioural expression is increased due to the provision of perches, claw-shortening devices, enclosed nest sites, and, to a certain extent, substrate. Hens in furnished cages have improved musculoskeletal health compared with those in conventional cages, and suffer the fewest fractures over their lifetimes compared to both cage-free and conventional cage systems. While furnished cages offer some provisions for dustbathing, their use varies between different types of furnished cages, and hens are often unable to dustbathe satisfactorily due to the depletion or inadequate provision of dustbathing materials. There is also a very limited ability for hens to forage and ground-scratch. Therefore, while there are some provisions to allow greater behavioural expression than in conventional cages, the hens' full behavioural repertoire cannot be expressed in furnished cages. Cage-free systems have the potential to allow hens to express their full behavioural repertoire. This is dependent on stocking densities, flooring material and maintenance, as well as the provision of adequate resources including suitable enclosed nesting sites and ample perch space. Foraging, ground-scratching and dustbathing in particular are able to be fully expressed in cage-free systems: these activities are impossible for hens to perform in conventional cages, and are very limited in furnished cages. Hens in cage-free systems have the best musculoskeletal health, a decreased incidence of osteoporosis, and fewer fractures at depopulation. However, the larger group sizes and the ability to perform a greater variety of behaviours also contribute to the shortcomings of cage-free housing systems. Two of the biggest welfare concerns in cage-free systems are the extent to which infectious diseases and severe feather pecking can occur.
Another factor which affects welfare is the higher incidence of fractures incurred during the laying period. The incidence of fractures may be addressed by good design, placement and management of structures in the shed. Genetic selection programmes should also be utilised to decrease the sensitivity of hens to osteoporosis and fractures. Similarly, the risk of severe feather pecking may be mitigated by good management practices including adequate diets, suitable environmental enrichment, minimising stress, matching the rearing and laying environments, and pairing this with genetic selection. The risk of infectious diseases may be mitigated by health management practices encompassing biosecurity, vaccination and hygiene programmes. The main risks to hen welfare in cage-free systems are, at present, highly variable, and need to be addressed by management practices, robust welfare standards, genetic selection, and further research. Conversely, the extreme behavioural restriction that hens experience in conventional cages cannot be mitigated. Since the introduction of conventional cages, the scientific assessment of animal welfare has advanced. A major focus for animal welfare science in the future will be the promotion of positive affective experiences while avoiding negative experiences. While cages can allow greater control over the environment and bird health, the full impact on the welfare of the hens needs to be considered. The opportunity to perform behaviours which hens are motivated to perform is central to the experience of positive welfare states. Science should have a very prominent role in underpinning decisions on animal use and the conditions under which animals are kept. Many countries have established scientific committees and independent animal welfare advisory bodies to ensure that the development of animal welfare standards is science-based, and many have instituted legislated phase-outs of conventional cages. The Australian Productivity Commission concluded that the current standards-setting process is not adequate and recommended that an independent body be established to develop farm animal welfare standards. However, while it is purported that the standards reflect contemporary scientific knowledge, there is no scientific review, nor scientific committee to inform the development of these standards, and conventional cages are permitted with no phase-out proposed.
Long-term geometric quality assurance of radiation focal point and cone-beam computed tomography for Gamma Knife radiosurgery system

To investigate the geometric accuracy of the radiation focal point (RFP) and cone-beam computed tomography (CBCT) over long-term periods for the ICON Leksell Gamma Knife radiosurgery system. This phantom study utilized the ICON quality assurance tool plus, and the phantom was manually set on the patient position system before the implementation of treatment for patients. The deviation of the RFP position from the unit center point (UCP) and the positions of the four ball bearings (BBs) in the CBCT from the reference position were automatically analyzed. During 544 days, a total of 269 analyses were performed on different days. The mean ± standard deviation (SD) of the deviation between measured RFP and UCP was 0.01 ± 0.03, 0.01 ± 0.03, and −0.01 ± 0.01 mm in the X, Y, and Z directions, respectively. The deviations with offset values after the cobalt-60 source replacement (0.00 ± 0.03, −0.01 ± 0.01, and −0.01 ± 0.01 mm in the X, Y, and Z directions, respectively) were significantly (p = 0.001) smaller than those before the replacement (0.02 ± 0.03, 0.02 ± 0.01, and −0.02 ± 0.01 mm in the X, Y, and Z directions, respectively). The overall mean ± SD of the four BBs was −0.03 ± 0.03, −0.01 ± 0.05, and 0.01 ± 0.03 mm in the X, Y, and Z directions, respectively. Geometric positional accuracy was ensured to be within 0.1 mm on most days over a long-term period of more than 500 days.

Introduction

The Leksell Gamma Knife (LGK) radiosurgery system serves as an alternative to neurosurgery for various intracranial diseases such as malignant and benign brain tumors, cerebrovascular malformations, and trigeminal neuralgia [1-5], and patients do not require general anesthesia and usually receive treatment while awake. The LGK is equipped with approximately 200 radioactive cobalt-60 sources emitting gamma rays, and these gamma rays converge at a radiation focal point (RFP), called the "unit center point (UCP)", whose coordinates are (100.0, 100.0, 100.0) in the LGK coordinate system [6], to deliver high-dose focused radiation to the target while minimizing radiation damage to surrounding healthy tissue. To achieve precise dose delivery, patients are immobilized using a lightweight frame (Leksell Coordinate Frame; Elekta AB, Stockholm, Sweden) attached to the head with four pins. The treatment plan needs to be generated based on stereotactic magnetic resonance images (MRI) or computed tomography (CT) taken with the frame attached, and the frame must remain in place until treatment planning and treatment are complete. Thus, patients spend a long time with the frame on, and the burden on the patient is high. The frame enables precise fixation of the patient's head during the treatment procedure [7]; therefore, a margin for gross tumor volume is not required to compensate for uncertainty in patient head positioning, in principle [8]. Rigorous geometric quality assurance (QA), confirming that the RFP of the gamma rays corresponds to the UCP, is essential for treatment accuracy.
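To make the coordinate convention concrete, the sketch below (our own illustrative Python; the function name and all numeric values are assumptions, not from the paper) computes the RFP-UCP deviation and shows how a calibrated offset is subtracted before the deviation is reported:

```python
import numpy as np

UCP = np.array([100.0, 100.0, 100.0])  # unit center point (mm) in the Leksell frame

def rfp_deviation(measured_rfp, offset=(0.0, 0.0, 0.0)):
    """Deviation (mm) of a measured radiation focal point from the UCP,
    after subtracting an engineer-calibrated offset (if any)."""
    return (np.asarray(measured_rfp, float) - np.asarray(offset, float)) - UCP

# Illustrative only: a raw Y-deviation of about -0.38 mm (the magnitude the
# study reports without offsets) largely vanishes once the offset is applied.
print(rfp_deviation([100.01, 99.62, 100.00]))                            # raw
print(rfp_deviation([100.01, 99.62, 100.00], offset=(0.0, -0.38, 0.0)))  # corrected
```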
The latest clinically available ICON LGK system (Elekta AB) incorporates an on-board cone-beam CT (CBCT), enabling a pre-planning workflow in clinical practice. In this workflow, treatment plans based on non-stereotactic MRI and CT without the frame are generated (pre-plan) before the day of treatment. On the day of treatment, a stereotactic CBCT with the frame is acquired and the pre-plan is reoptimized, accounting for the difference in patient position relative to the preoperative non-stereotactic imaging [9]. The time for which patients wear the frame is shortened markedly, and the patient burden is smaller than in a workflow without CBCT. This workflow also allows frameless treatment using a thermoplastic mask for patient head fixation. On the day of treatment, the masked CBCT is registered to the non-stereotactic imaging (MRI or masked CT), and the pre-plan is reoptimized as in the treatment planning procedure using the frame. These workflows for LGK treatment utilizing CBCT on patients immobilized with a frame or mask can improve patient comfort; however, they require that the geometric accuracy of the CBCT be ensured, and the stereotactic coordinate system established by the CBCT system should be precisely aligned with the Leksell coordinate system. American Association of Physicists in Medicine (AAPM) Task Group (TG) 178 stated that the coincidence of the UCP and the RFP and the alignment of the CBCT should be verified before each treatment using the tools and procedures provided by the manufacturer [6]. This extensive QA work takes a substantial amount of time. If the accuracy of agreement between the UCP and RFP is excellent, QA could be simplified, but few papers have investigated the long-term accuracy of the UCP and RFP. One concern in simplifying QA is that cobalt-60 sources have a finite half-life, requiring periodic source replacement operations. Figure 1 shows an overview of the LGK system (a); the source replacement requires extensive work to remove the CBCT (b) and patient positioning system (PPS) and rotate the LGK unit (c). This study aims to investigate the geometric accuracy of the RFP and CBCT in relation to the Leksell coordinate system over long periods for the ICON LGK system. Furthermore, the geometric accuracy before and after the cobalt-60 source replacement operation is compared.

Materials and methods

Ethical approval was not required because our design only involved the use of phantoms. Figure 2(a) shows the ICON QA tool plus, which employs a centroid diode detector and four steel ball bearings (BBs), each with a diameter of 4 mm. These BBs are strategically positioned to ensure that they do not shade each other or the precision diode in the X-ray projection images during CBCT acquisition.

The details of the Focus and CBCT precision QA are described in the vendor-provided white paper [10]. Briefly, in the Focus precision QA, the PPS with the attached QA tool plus moves, and the dose profiles through the radiation shot are measured by the diode. For each profile, the 47%, 50%, and 53% positions relative to the profile's peak on both the rising and falling edges are determined. By repeating this process in the reverse direction, 12 positions in each coordinate direction are obtained, and their average is determined as the RFP. The difference in position between the measured RFP and the UCP is calibrated as offset values by a service engineer during periodic maintenance (once every six months, or after replacement of the cobalt-60 sources). In treatment, the treatment plans are adjusted for the known offset so that the RFP coincides with the UCP.
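As a minimal sketch of the profile analysis just described (our own illustrative Python, not the vendor's algorithm): for one scan direction, each of the three threshold fractions yields a crossing on the rising and falling edges, and the profile centre is the mean of those crossings.

```python
import numpy as np

def edge_crossings(pos, dose, fractions=(0.47, 0.50, 0.53)):
    """Positions (mm) where a 1-D dose profile crosses the given fractions
    of its peak, on both the rising and falling edges, found by linear
    interpolation between neighbouring samples."""
    pos, dose = np.asarray(pos, float), np.asarray(dose, float)
    peak, i_pk = dose.max(), int(dose.argmax())
    out = []
    for f in fractions:
        level = f * peak
        i = int(np.where(dose[:i_pk] <= level)[0][-1])        # last point below level, left of peak
        out.append(np.interp(level, dose[i:i + 2], pos[i:i + 2]))
        j = int(np.where(dose[i_pk:] <= level)[0][0]) + i_pk  # first point below level, right of peak
        out.append(np.interp(level, dose[j - 1:j + 1][::-1], pos[j - 1:j + 1][::-1]))
    return out

# The centre estimate for one axis is the mean of the six crossings; scanning
# the same axis in the reverse direction doubles this to the 12 positions
# averaged in the Focus precision QA.
def axis_centre(pos, dose):
    return float(np.mean(edge_crossings(pos, dose)))
```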
In the CBCT precision QA (Fig. 2(c)), the positions of the BBs in the X-ray projection images acquired during CBCT are automatically detected; the BB positions in each subsequent projection image are automatically found in the neighbourhood of their positions in the previous projection. This procedure is repeated until the positions of the BBs have been detected in all projections. Subsequently, the 3D positions of the BBs are calculated based on the same geometric data (positions of the BBs, detector, and X-ray source) utilized in the CBCT reconstruction algorithm. The reference positions of the BBs are recorded, and the reference point is updated by a service engineer during periodic maintenance once a year.

From 1 April 2022 to 26 September 2023, the Focus and CBCT precision QA were performed before the implementation of LGK treatment for patients and whenever maintenance was performed by a service engineer. The ICON QA tool plus was manually set on the PPS, while the geometric QA was performed fully automatically to minimize user interaction. The acquisition parameters of the CBCT were: tube voltage of 90 kV, tube current of 10 mA, CT dose index of 2.5 mGy, and image resolution of 0.368 mm. The cobalt-60 sources were replaced from the end of April to the beginning of May 2023. The deviation of the measured RFP from the UCP and the deviation of the BB positions from the reference positions were analyzed from the information recorded in the machine log file. In the log file, the analysis results with and without offset values were recorded. Subsequently, the data, excepting the period during the cobalt-60 source replacement operation, were divided into two groups: before and after replacement. The Mann-Whitney U test was performed to measure the significance of the difference in Focus and CBCT precision QA between before and after replacement (SPSS software version 27; IBM, Armonk, NY, USA). Statistical significance was set at p < 0.05.

Results

During 544 days, a total of 269 Focus and CBCT precision QAs were analyzed (186 were obtained before cobalt-60 source replacement, 3 during replacement, and 80 after replacement). Figure 3 shows the daily deviation of the measured RFP from the UCP with and without offset values, and these quantitative values are summarized in Table 1. The RFP calibrated using offset values showed excellent agreement with the UCP; the deviation was within 0.1 mm in all directions on most days. The mean ± standard deviation (SD) was 0.01 ± 0.03, 0.01 ± 0.03, and −0.01 ± 0.01 mm in the X, Y, and Z directions, respectively. Without offset values, the deviations were largest in the Y direction, with a mean ± SD of −0.38 ± 0.04 mm and a maximum deviation of −0.53 mm. Furthermore, a significant shift in the magnitude of the deviation without offsets was observed after the cobalt-60 source replacement operation (p = 0.001). The deviations with offset values after the source replacement (0.00 ± 0.03, −0.01 ± 0.01, and −0.01 ± 0.01 mm in the X, Y, and Z directions, respectively) were significantly (p = 0.001) smaller than those before the replacement (0.02 ± 0.03, 0.02 ± 0.01, and −0.02 ± 0.01 mm in the X, Y, and Z directions, respectively). Figure 4 shows the daily deviation of the BB positions from the reference positions, and these quantitative values are summarized in Table 2. The deviations were similar for all BBs, and the daily BB positions matched the reference positions; the deviations for all BBs were within 0.1 mm on most days. There was no significant difference in the deviation of BB positions between before and after the source replacement (p > 0.05).
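A minimal sketch of the before/after comparison described above, assuming SciPy in place of the SPSS routine the authors used (the deviation values below are invented for illustration; real data come from the machine log file):

```python
from scipy.stats import mannwhitneyu

# Hypothetical daily RFP-UCP deviations (mm) along one axis, split at the
# cobalt-60 source replacement.
before = [0.02, 0.03, 0.01, 0.02, 0.04, 0.02, 0.03]
after = [0.00, -0.01, 0.01, 0.00, -0.02, 0.00, -0.01]

stat, p = mannwhitneyu(before, after, alternative="two-sided")
print(f"U = {stat}, p = {p:.4f}")  # p < 0.05 would indicate a significant shift
```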
Discussion

This study demonstrated the long-term geometric accuracy of the RFP and CBCT for the ICON LGK system. Currently, three main types of LGK systems are used in clinical practice: the MODEL C [11], introduced in 1999; the PERFEXION [12], introduced in 2006; and the ICON, introduced in 2015. In each system, the UCP, the RFP, and the CBCT (if applicable) are precisely aligned when the LGK system is first installed. Because LGK treatment utilizes a highly steep dose distribution, misalignment due to age-related deterioration or incorrect alignment of instruments can directly affect the effectiveness of the patient's treatment.

Traditionally, film measurement with high spatial resolution is performed to confirm RFP and UCP agreement. Maitz et al. used a specially designed tool, in which a pin pierced a very small hole indicating the UCP, for film irradiation, and the coincidence of the UCP and RFP was confirmed [13]. Alternatively, Maraghechi et al. utilized a MicroDiamond detector, which has a 0.004 mm³ active volume, and demonstrated that the coincidence between the UCP and RFP was less than 0.3 mm for the ICON system [14]. In TG 178, the tolerance of the Focus precision QA using the vendor-provided QA tool plus is ≤ 0.2 mm for the PERFEXION and ICON LGK systems, and the deviation between the UCP and RFP should not change considerably from one day to the next [6]. Our study demonstrated that the accuracy of the Focus precision QA was within tolerance over a long-term period when the calibration between the UCP and RFP was appropriately performed. The required offset values were different in each direction (Fig. 3), and the magnitude of the offset value changed slightly but significantly before and after the replacement operation. The reason for the slight change in offset value may be the extensive work required. The fact that significant differences were observed even when the offset value was used may be due to the skill of the engineer who determined the optimal offset value or accurately positioned the LGK system. The half-life of the cobalt-60 sources is approximately 5.3 years, and thus periodic source replacement (every 5-7 years) is required to avoid prolonged treatment times. Our data are valuable because not many ICON devices have yet reached the time of source replacement. To replace the sources, the PPS, cables, and covers surrounding the shielding are completely removed, as shown in Fig. 1, and this extensive operation may affect the offset value. In this study, the deviation became smaller after the source replacement, but it is possible that it could become larger. Careful QA to ensure proper calibration after source replacement may be worthwhile.

When installing the ICON system, the establishment of a transformation between the reconstructed image space in CBCT and the Leksell coordinate space is achieved by the manufacturer. Thereafter, users need to perform the CBCT precision QA to ensure the accuracy of dose delivery of CBCT-based treatment, and the tolerance is ≤ 0.4 mm. AlDahlawi et al.
demonstrated that the mean ± SD of the CBCT precision QA test was 0.12 ± 0.04 mm and that the deviation was below tolerance for 2 years [15]. Our study supports their finding that the geometric accuracy of the CBCT equipped with the ICON system is guaranteed over time, and we provide the new finding that the geometric accuracy of the CBCT before and after source replacement was unchanged. This may be explained by the fact that the deviation of the CBCT geometry was calculated from the reference positions of the BBs, not from the LGK coordinate system.

Several limitations are included in our study. First, we focused on the geometric accuracy of the RFP and CBCT, although there are many other QA tests that should be performed, such as radiation and patient safety tests, mechanical checks, and dosimetric tests [6]. Second, the deviations can vary depending on the system, so a multi-center study is expected. We consider that daily Focus and CBCT precision QA will continue to be necessary for the LGK system [6], as treatment is usually delivered with 0 mm margins. However, in the future, QA may be simplified to a scheduled interval if it can be shown that the accuracy of the LGK at multiple facilities can be guaranteed. Finally, this study evaluated the deviation of the RFP with a collimator size of 4 mm, while the ICON system can utilize collimator sizes from 4 to 16 mm according to the target size. The magnitude of the deviation might vary depending on the collimator size.

In conclusion, the geometric quality assurance of the RFP and CBCT for the ICON LGK system is guaranteed within the tolerance reported in AAPM TG 178 over a long-term period. The deviation of the RFP changed slightly but significantly between before and after the cobalt-60 source replacement, and careful QA may be required after the operation.

Fig. 1 a Overview of the LGK system; b, c the source replacement requires extensive work to remove the CBCT and PPS and rotate the LGK unit
Fig. 2 a ICON quality assurance (QA) tool plus, and schematic overview of b Focus and c CBCT precision QA
Fig. 3 Daily deviation of the measured radiation focal point from the unit center point, with and without offset values, in three directions
Fig. 4 Daily deviation of ball bearing positions from the reference position
Interaction of 2,6,7-Trihydroxy-Xanthene-3-Ones with Iron and Copper, and Biological Effect of the Most Active Derivative on Breast Cancer Cells and Erythrocytes

Metal chelators can be potentially employed in the treatment of various diseases, ranging from metal overload to neoplastic conditions. Some xanthene derivatives were previously reported to complex metals. Thus, in a search for a novel iron or copper chelator, a series of 9-(substituted phenyl)-2,6,7-trihydroxy-xanthene-3-ones was tested using a competitive spectrophotometric approach. The most promising compound was evaluated in biological models (breast adenocarcinoma cell lines and erythrocytes). In general, substitution of the benzene ring in position 9 had a relatively low effect on the chelation. Only the trifluoromethyl substitution resulted in stronger chelation, probably via a positive effect on solvation. All compounds chelated iron, but their copper-chelating effect was only minimal, since it was no longer observed under highly competitive conditions. Interestingly, all compounds reduced both iron and copper. Additional experiments showed that the trifluoromethyl derivative protected erythrocytes and even cancer cells against excess copper. In conclusion, these compounds are iron chelators which are also capable of reducing iron/copper, but the copper-reducing effect is not associated with increased copper toxicity.

Introduction

Iron and copper chelators have significant potential in the treatment of various diseases, ranging from metal overload conditions (repeated blood transfusions, Wilson disease) to cancer [1]. In order to define the ideal properties of chelators for potential therapeutic use, a brief overview of these illnesses and the clinically used chelators, together with their disadvantages, will be provided. Thalassemia is a common hematological disorder with high prevalence in the Mediterranean area, the Middle East, and many other Asian countries. As a consequence of migration, the prevalence of this disease in formerly non-prevalent countries is increasing [2]. Severe thalassemia requires blood transfusions followed by iron chelation therapy. The current palette of iron chelators includes the potent deferoxamine, which needs frequent or continuous parenteral administration, and two oral iron chelators, deferiprone and deferasirox. Deferiprone is a somewhat controversial drug, which is able to induce idiosyncratic agranulocytosis [3,4]. Deferasirox seems to have a better safety profile, but it can cause peptic ulcers and severe hepatic and renal dysfunction in rare cases, and in addition, it might not always be sufficiently active [5]. Furthermore, the selectivity of the current clinically used chelators is under discussion [6]. For these reasons, expanding the selection of currently available and ideally selective iron chelators is desired. Wilson disease is a copper disorder, associated with copper accumulation in the human body. Its prevalence worldwide is lower than that of thalassemia (1 in 10,000-30,000 persons) [7]. The treatment modalities include two oral copper chelators, D-penicillamine and trientine, and zinc in milder cases or in combination regimens. D-penicillamine is a cumbersome drug with many side effects [8], and its copper-chelating effects seem to be weak [9]. Trientine has much better tolerance, but like D-penicillamine, it can worsen neurological symptoms. Currently, ammonium tetrathiomolybdate is being tested in clinical trials [10-12].
Hence, like in the case of iron chelators, novel selective copper chelators are also needed. It is also well known that both metals are needed for cancer growth and, unsurprisingly, chelators have been tested as anti-cancer compounds [13,14]. In the case of tumors, however, rather than depriving tumors of these metals via strong chelators, the redox cycling of the metals by reducing chelators, associated with increased production of reactive oxygen species (ROS), is more convenient. It is well known from the literature that some chelators can reduce these metals, and since cancer cells are abundant in copper and iron, the effect of these redox cycling chelators is targeted, in particular, against the tumor [9,15-17]. Some xanthone derivatives, which were developed as fluorescent metal probes, were previously reported to form colored complexes with several metals [18-22]. We also reported that the series of xanthones reported herein behaved as potent cytotoxic drugs against cervical, colorectal, hepatocellular and alveolar adenocarcinoma cell lines, with IC50 values mostly in units or tens of µM concentrations [23]. In general, the xanthone scaffold seems to be a suitable base for the development of novel anticancer compounds [24,25]. For this reason, a xanthene core with chelating site(s) was selected as a scaffold for novel chelators. In this study, a series of 2,6,7-trihydroxy-xanthene-3-ones (Figure 1) was tested for their ability to chelate and reduce iron or copper. Since this study aimed at the selection of the most potent chelating compound from this series, and at the in vitro assessment of its ability to treat metal overload or cancer, the most efficient xanthone was also tested in biological experiments using healthy and cancer cells. Specifically, a strong chelator designed for iron overload diseases should protect erythrocytes against the damaging effects of copper/iron, while a chelator suitable for cancer treatment should increase the toxicity of the metals in cancer cells. For these reasons, the effect of the most active xanthone was tested on both erythrocytes and breast cancer cell lines together with copper or iron ions.

Metal Interaction Experiments

Stock solutions of cupric ions (cupric sulfate pentahydrate, CuSO4·5H2O), ferric ions (ferric chloride hexahydrate, FeCl3·6H2O) and ferrous ions (ferrous sulfate heptahydrate, FeSO4·7H2O) were prepared in water (Milli-Q RG, Merck Millipore, MA, USA), while that of cuprous ions (cuprous chloride, CuCl) was prepared in an aqueous solution of 0.1 M HCl and 1 M NaCl. The corresponding fresh working solutions (0.25 mM) were prepared by dilution either in DMSO (BCS method) or distilled water (hematoxylin and ferrozine methods). Hydroxylamine hydrochloride, ferrozine, and bathocuproinedisulfonic acid disodium salt (BCS) were dissolved in distilled water. Hematoxylin was dissolved in DMSO and its working solution (0.25 mM) was usable for no longer than 90 min. All tested xanthones were dissolved in DMSO. Experiments were performed in the following buffers: acetate (pH 4.5 and 5.5; 15 mM sodium acetate salt, and 27.3 and 2.7 mM acetic acid, respectively) and HEPES (pH 6.8 and 7.5; 15 mM sodium HEPES, and 71.7 and 14.3 mM HEPES, respectively). DMSO was purchased from Avantor Performance Materials (VWR International s.r.o., Stříbrná Skalice, Czech Republic), while all other chemicals were purchased from Sigma-Aldrich (St. Louis, MO, USA).
Iron/Copper Chelation and Reduction Assessment

These methods are based on the reaction of free metals with an indicator, and the distinct absorption of the resultant complexes in the visible spectrum. The indicators are selective for one oxidation state of copper or iron. The same methodology can be used for both chelation and reduction when a suitable reductant (hydroxylamine) is added. The metal chelation experiments were performed in 96-well microplates, at least in duplicate, at room temperature. A Synergy HT Multi-Detection Microplate Reader (BioTec Instruments, Inc., Winooski, VT, USA) was used for these measurements. The detailed methodology was described in our original papers [9,29,30].

Ferrozine Method

Principle: Ferrozine is a specific indicator forming a magenta-colored complex with ferrous ions. To evaluate iron chelation, solutions of the tested compounds (50 µL) dissolved in DMSO at various concentrations up to 10 mM were mixed in buffers with ferrous or ferric ions (50 µL, 250 µM) for 2 min. Ferrozine (50 µL, 5 mM) was then added in the case of ferrous ions. Hydroxylamine (50 µL, 10 mM) was added prior to ferrozine at pH 7.5 to inhibit ferrous oxidation at this pH. This hydroxylamine solution was also used in the case of total iron chelation at pH 4.5 to reduce the remaining ferric to ferrous ions, which then formed the previously mentioned complex with ferrozine. Absorbance was measured at 562 nm immediately after the addition of ferrozine and again 5 min later. To determine the degree of ferric ion reduction, various concentrations of the tested compounds were mixed for 2 min with ferric ions in buffers. Afterwards, ferrozine was added and absorbance was measured both immediately and 5 min later. Hydroxylamine was used as the positive control (100% reduction).
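Across all three indicator methods, the fraction of non-chelated (or reduced) metal is computed from blank-corrected absorbances relative to a metal-only control, as formalized in the statistical analysis section below. A minimal sketch (our own Python; the absorbance values are invented for illustration):

```python
def percent_free_metal(a_sample, a_sample_blank, a_control, a_control_blank):
    """Percentage of non-chelated (or reduced) metal: blank-corrected sample
    absorbance relative to the blank-corrected metal-only control."""
    return 100.0 * (a_sample - a_sample_blank) / (a_control - a_control_blank)

# e.g. ferrozine readings at 562 nm (illustrative numbers only)
free = percent_free_metal(0.42, 0.05, 0.88, 0.04)
print(f"non-chelated iron: {free:.0f}%, chelated: {100 - free:.0f}%")
```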
Hematoxylin Method

Principle: Hematoxylin forms a complex with cupric ions. Different concentrations of each tested compound were mixed with cupric ions (50 µL, 250 µM) for 2 min in the presence of a buffer. The mixture was incubated for the next 3 min with the hematoxylin indicator (50 µL, 250 µM) in order for the reaction between the non-chelated copper ions and the indicator to occur. Absorbance was measured at this time and again after another 4 min. Different wavelengths were used according to pH: 595 nm (pH 5.5), 590 nm (pH 6.8) and 610 nm (pH 7.5), as reported in our aforementioned paper [9].

BCS Method

Principle: The BCS method is analogous to the ferrozine method, with the exception that BCS is specific to cuprous ions. Different concentrations of each tested compound in DMSO up to 10 mM were mixed with cupric or cuprous ions (50 µL, 250 µM) and incubated for 2 min in a buffer. In the case of cupric ions, hydroxylamine (50 µL; final concentrations varied according to pH: 1 mM at pH 6.8 and 7.5, 10 mM at pH 4.5 and 5.5) was added after mixing in order to reduce the non-chelated cupric ions. In the case of cuprous ions, hydroxylamine was added before the copper solution in order to retain copper in its reduced state. Non-chelated copper was then evidenced in both cases by the BCS indicator (50 µL, 5 mM), and absorbance was read at 484 nm immediately and again after 5 min. A modified BCS method was used for the determination of cupric ion-reducing potential. Cupric ions were mixed with a tested substance in a buffer without any hydroxylamine for 2 min. The reduced copper ions were then evidenced by BCS. Hydroxylamine was used as the positive control (100% copper reduction).

Cancer Cell Viability Assay

The CellTiter 96® aqueous non-radioactive cell proliferation assay (Promega, Madison, WI, USA) was performed to evaluate the in vitro effects of Fe2+, Cu2+ and the 4′-trifluoromethyl derivative (9) in the breast adenocarcinoma MCF7/S0.5 (parental MCF7 cells adapted to a low-sera environment) and MCF7/182R-6 (derived from MCF7, resistant to the antiestrogen fulvestrant) cell lines. The employed method uses the bioreduction of the tetrazolium salt MTS (3-(4,5-dimethylthiazol-2-yl)-5-(3-carboxymethoxyphenyl)-2-(4-sulfophenyl)-2H-tetrazolium) into a colored formazan with an absorbance peak at 490 nm, which takes place only in viable cells via mitochondrial metabolism. Experiments were conducted in accordance with the manufacturer's guidelines. The cells were treated with the tested compounds at different concentrations (1 nM to 750 µM), metals (1 nM to 750 µM) or vehicle (DMSO 0.1%) for 48 h in DMEM/F-12 media without phenol red in 96-well plates. Alternatively, xanthone derivative 9 was pre-incubated with the metals prior to being added to the cell culture plates and incubated for 48 h. In these experiments, the xanthone concentration was maintained constant (500 µM) while the metals were tested over a range of concentrations (1 nM to 750 µM), and vice versa. At the end of the treatment, 20 µL of MTS reagent was added to each well and incubated for a further 3 h. After this incubation period, absorbance at 490 nm was measured using a plate reader (Tecan, Mannedorf, Switzerland). The results are expressed as relative cell viability, considering the vehicle (DMSO 0.1%) and toxic control (SDS 10%) as 100% and 0% of the response, respectively. All experiments were performed in triplicate and repeated at least three times.
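A minimal sketch of the viability normalization just described, assuming a simple two-point scaling between the vehicle (100%) and the SDS toxic control (0%); the function name and readings are our own illustrative assumptions:

```python
def relative_viability(a490_sample, a490_vehicle, a490_sds):
    """Scale MTS absorbance at 490 nm so the vehicle control (DMSO 0.1%)
    reads 100% and the toxic control (SDS 10%) reads 0%."""
    return 100.0 * (a490_sample - a490_sds) / (a490_vehicle - a490_sds)

# illustrative plate readings
print(f"{relative_viability(0.65, 1.10, 0.08):.0f}% viability")  # ~56%
```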
Erythrocyte Lysis Assay

This assay was performed according to previous studies with some modifications [31,32]. Blood samples were obtained from adult rats (Wistar Han, Velaz, s.r.o., Czech Republic) by exsanguination into heparinized tubes. The exsanguination was performed by a trained researcher in accordance with The Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (8th edition, revised 2011, ISBN-13: 978-0-309-15400-0). The blood was used as a by-product from rats after isolation of the aorta, aimed at testing vasodilatory effects (approval by the Czech Ministry of Health No. MSMT-34121_2017-2). Afterwards, the blood was centrifuged at 3000 g and plasma was removed. The erythrocyte fraction was purified by adding saline and further centrifugation. Afterwards, additional heparin (final concentration of 10 IU/mL in the erythrocyte suspension) was added, and this mixture was diluted 10 times with 1 mM glucose solution in saline. The erythrocyte suspension obtained was then used for erythrocyte toxicity evaluation: 940 µL of this suspension was incubated with 10 µL of compound 9 dissolved in DMSO (various concentrations; the final concentration of DMSO was 1%) and 50 µL of a metal solution (cupric or ferrous sulfate dissolved in saline, at final concentrations of 500 and 2500 µM, respectively) for 4 h at 37 °C. The sample was then centrifuged at 7000 g for 10 min and 250 µL of supernatant was used for the determination of lactate dehydrogenase (LDH) activity, the marker related to metal-induced erythrocyte lysis. The remaining liquid was discarded and a lysis buffer (2 mM EDTA, 1 mM dithiothreitol, 1% Triton X, 0.1 M phosphate buffer of pH 7.8) was added to the sediment in the same quantity as the removed supernatant. After 20 min of incubation at room temperature, the samples were treated in the same way as the previous supernatant, and the determined LDH activity was considered to be the marker of the remaining, non-lysed erythrocytes. The protocol used for LDH evaluation was adapted from Chan et al. [32] with minor modifications: erythrocyte suspension was used instead of cell culture, and β-NAD conversion was used to quantify the enzymatic activity. Results were calculated as the percentage of erythrocytes lysed and compared with the positive control, where the solvent DMSO was used instead of the tested compound. The negative control was not treated with the metal, but was otherwise processed in the same way as the other samples.

Theoretical Calculation

The energy of compound 9 was minimized using the Chem3D software, which is a part of the ChemDraw package version 18.1 (PerkinElmer, Waltham, MA, USA).

Mathematical and Statistical Analysis

The amount of non-chelated or reduced iron/copper was calculated as the difference between the absorbance of the tested sample (with an indicator) and its corresponding blank (without any indicator), divided by the difference between the absorbance of the control sample (the known amount of metal without the tested substance) and its control blank. Data are expressed as mean ± SD. The differences in chelation potencies for both iron and copper were checked by 95% prediction (confidence) intervals of the chelation curves. The differences between copper reductions caused by the xanthones were assessed by 95% confidence intervals of the linear regression lines. Differences in cell lysis were tested by Student's t-test. IC50 values were obtained using Hill's equation by nonlinear regression analysis from at least 7-point curves performed in triplicate. For all statistical approaches, GraphPad Prism version 6 for Windows (GraphPad Software, San Diego, CA, USA) was used.
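The IC50 fitting step can be sketched as follows (our own Python, using SciPy rather than the GraphPad routine the authors used; the four-parameter Hill model and the data points are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, bottom, ic50, n):
    """Four-parameter Hill model: response as a function of concentration c."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** n)

# Illustrative 7-point viability curve (%, mean of triplicates) vs. µM
conc = np.array([1.0, 10.0, 50.0, 100.0, 200.0, 400.0, 750.0])
viab = np.array([99.0, 96.0, 84.0, 68.0, 51.0, 29.0, 13.0])

popt, _ = curve_fit(hill, conc, viab, p0=[100.0, 0.0, 200.0, 1.0])
print(f"fitted IC50 ~ {popt[2]:.0f} µM")
```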
Results

Firstly, the iron-chelating properties of all the tested compounds were determined under different pH conditions. In general, the iron chelation properties did not vary substantially among the compounds, and became weaker with a lowering of the pH (major representatives are shown in Figure 2, other compounds in Figure S1). At pH 7.5, the chelation activity of the majority of the compounds likely corresponded to a chelation stoichiometry of 1:1 (roughly 100% of the iron was chelated at pH 7.5 at the ratio of 1:1; see e.g., Figure 2B,C), with the exception of the 2′-hydroxy-3′-methoxy derivative (2, Figure S2A), which apparently formed complexes with ferrous ions at a stoichiometric ratio of 2:1 (see Figure 2E; at the ratio of 1:1, about 50% of the iron was chelated). At pH 6.8, the 4′-dimethylamino derivative (8) and the 4′-trifluoromethyl derivative (9) were the most efficient, while the other compounds had lower effects than these two congeners (Figure S2B). At pH 5.5, the chelating effects of all tested compounds were similar, and it appears that all formed complexes with a stoichiometry of 2:1, compound to iron. The only exception was 8, which was clearly less efficient (Figure S2C). At pH 4.5, again, there were generally no differences in ferrous chelation, with the exception of 9, which was the most efficient (Figure S2D). In the case of ferric chelation at the same pH, there were, however, apparently three subgroups. The largest group, containing the majority of the compounds, comprised the most effective xanthones with a probable stoichiometric ratio of 2:1. A lower effect was observed in the 2′-hydroxy-3′-methoxy, 4′-ethoxy, 3′-bromo and 2′-chloro-6′-fluoro derivatives (2, 7, 11 and 12). The 4′-dimethylamino congener (8, Figure S2E) was even less efficient. A comparison of the chelation effects of all tested compounds is summarized in Figure 3.

All tested compounds were able to reduce iron at pH 4.5 and partly at pH 5.5. None of them reduced iron at pH 6.8 or 7.5. The reduction curves were bell-shaped in all cases. With the exception of the 4′-dimethylamino derivative, the reducing properties were observed only at low ratios, and were negligible or abruptly attenuated at ratios higher than 1:1. The maximal reduction reached 50-70% of the added iron in most cases. However, the 4′-hydroxy-3′-methoxy-5′-nitro derivative (6) and the acetylamino derivative (10) reached only approximately 30%. Representative compounds are shown in Figure 4, and all other compounds in Figure S3.

The screening of copper chelation activity with hematoxylin showed that all compounds were able to chelate cupric ions at pH 5.5-7.5 with approximately the same potency (Figure S4). However, under more competitive conditions, their chelation activity towards both cupric and cuprous ions was negligible (Figures S5 and S6), suggesting a low affinity for copper.

Similar to iron, all compounds were able to reduce cupric ions. In contrast to iron, all xanthones were able to reduce 100% of the added copper, and their reduction properties did not decrease at higher ratios. There were significant differences in the efficacy of cupric ion reduction at lower ratios. Comparison was performed via 95% confidence intervals of the reduction lines (see an example in Supplementary Figure S7). The most efficient compound was compound 9 (Figure 5A), followed by derivative 8, which was less efficient at pH 4.5, 5.5 and 6.8.
The 3′,4′-dihydroxy derivative (3) was less efficient at pH 5.5 and 7.5 when compared with the 4′-dimethylamino derivative (8). There were no differences between the 3′,4′-dihydroxy derivative (3) and the 4′-hydroxy-3′,5′-dimethoxy derivative (5), which were followed by the less active 4′-ethoxy (7) and 3′-bromo (11) congeners. The acetylamino derivative (10) had the same activity as the previous two compounds, with the exception of pH 4.5, where it was less efficient. The least efficient compound was the 2′-hydroxy-3′-methoxy derivative (2, Figure 5B). In the case of the 4′-hydroxy-3′-methoxy-5′-nitro (6) and 2′-chloro-6′-fluoro (12) derivatives, the relationship to the other compounds was more complicated, particularly in the case of the latter, where the reduction was markedly dependent on pH. The results of the copper reduction experiments are summarized in Figure 6.
Figure 6. Summary of the differences in copper reduction among the tested compounds. The direction of the arrows shows the more active compounds. The numbers represent the pH at which the differences were observed.

As a result of their ability to reduce both metals (see likely scenarios in the Supplementary Data, Figure S8), the xanthones can potentially promote the iron- and/or copper-based Fenton reaction and, consequently, increase the cytotoxic effects of these metals. To study the potential cytotoxic effects toward cancer cells, the most active chelator, the 4′-trifluoromethyl derivative (9), was selected and tested on the breast adenocarcinoma cell line (MCF7/S0.5) and the fulvestrant-resistant derived cell line (MCF7/182R-6). We selected this pair of cell lines in order to investigate whether the activity of this compound is affected by this specific type of resistance, which is a common consequence observed in cancer. Metals and the xanthone derivative were tested over a range of concentrations, alone and also in combination, and the results were compared to the vehicle control (DMSO 0.1%), assigned 100% viability.
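For background, the Fenton chemistry invoked above follows the textbook scheme below (general chemistry, not a mechanism demonstrated in this study); XH2 is our generic shorthand for a reducing xanthone ligand that regenerates the catalytically active lower oxidation state:

$$\mathrm{Fe^{2+} + H_2O_2 \rightarrow Fe^{3+} + OH^{-} + {}^{\bullet}OH}$$
$$\mathrm{Cu^{+} + H_2O_2 \rightarrow Cu^{2+} + OH^{-} + {}^{\bullet}OH}$$
$$\mathrm{Fe^{3+}\ (or\ Cu^{2+}) + XH_2 \rightarrow Fe^{2+}\ (or\ Cu^{+}) + XH^{\bullet} + H^{+}}$$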
The results showed that although Fe2+ is, as expected, not toxic (IC50 = 2611 µM), the 4′-trifluoromethyl xanthone derivative possessed toxicity similar to that of Cu2+; the IC50s were 218 and 300 µM, respectively (Table 1). Next, we selected the concentration of 500 µM of copper, which was strongly cytotoxic in itself, and tested different concentrations of the chelator in order to see the cytotoxicity relationship in relation to the chelator:metal ratio. As shown in Figure 7, the copper cytotoxicity was apparently decreased by increasing concentrations of the 4′-trifluoromethyl derivative. The same experiment was also performed with iron. The addition of iron markedly protected cells against chelator toxicity (Figure 7). The same experiments were also performed with MCF7/182R-6 cells. Here, the IC50 values were, as expected, slightly higher than those for MCF7/S0.5. The pattern of results was similar, but the protection against copper was very mild (Supplementary Data, Figure S9).

Following up on the above results, the effects of the most active chelator were also evaluated on rat erythrocytes ex vivo, in order to assess whether the xanthones could protect healthy cells against the toxic effects of the metals. It is well known that copper can cause hemolysis due to its effect on erythrocytes. This undesirable effect can be suppressed by copper chelators [31], while iron is relatively non-toxic. Our previous experiments showed that the percentage of erythrocyte lysis suffers quite significantly from interindividual variability. Hence, we used a higher concentration of copper in order to (at least partly) diminish the variability. 500 µM of cupric ions induced lysis in different blood samples in the range of 28% to 51% of the total red blood cells in the suspension.
Regardless of that, the observed protective effects of the tested chelator (when the arithmetic difference between the percentage of the individual chelator sample and the positive blank was used) were homogenous, and in line with the cancer experiments. Substance 9 decreased the hemolysis by 15-20%, depending on the chelator to copper ratio (Figure 8A). Since iron was non-toxic, we used a very high concentration of 2500 µM. However, this concentration induced a small total lysis of about 1-8%, and this was not different from the negative control without iron. Interestingly, the tested chelator increased iron toxicity by about 10% (Figure 8B). It should be emphasized that the chelator itself, without metals, did not evoke any red blood cell toxicity, even at the highest tested concentration of 500 µM.

Table 1 (note): Viability was determined by the cell viability assay 48 h after treatment in MCF7/S0.5 cells. Vehicle-treated cells were set as 100% viability.

Discussion

The results of this study confirmed our initial assumption that 2,6,7-trihydroxy-xanthene-3-ones can form complexes with iron and copper. Based on the comparison between hematoxylin (mildly competitive conditions, wherein the final concentration of the indicator was equimolar to that of copper) and BCS (the indicator was given in 20-fold excess), it is apparent that these xanthones can also chelate copper. In contrast to iron, however, the complexes are not stable. In the latter case, the iron complexes were stable under a 20-fold excess of the indicator ferrozine. The 2-hydroxy-3-one fragment is probably responsible for the chelation, due to the higher electron density of the oxo group. Interestingly, the 6,7-dihydroxy fragment and the mentioned 2-hydroxy-3-one site were clearly not able to bind iron or copper simultaneously, since the proposed complexes had a stoichiometry of 1:1 or 2:1, xanthone:iron. Further, the functional groups on the benzene ring at position 9 are not involved, since the 3′,4′-dihydroxy derivative (3) was not more potent than the other compounds without this additional chelation site.
This outcome, that the additional catechol site did not improve chelation, was rather surprising since, for example, flavonoids possessing a benzene ring with a catechol substitution can bind more metals than those without this chelation site [9,33]. The differences in iron chelation were apparently minor, and seemed to be more related to the possible influence of the substituents on entropic factors (solvation) rather than on the electron density of the xanthene core. In particular, the trifluoromethyl derivative 9 was the most potent chelator. On the other hand, the 2′-hydroxy-3′-methoxy (2) and 2′-chloro-6′-fluoro derivatives (12) decreased chelation. The metal-reducing effect was markedly dependent on the substituent as well. As regards the influence of the substitution on the C(9) phenyl ring on iron chelation and Cu2+ reduction, clear-cut relationships are impossible to find. Intuitive chemical reasoning supported by simple MM2 force field calculations (part 2.3, Figure 9) leads to the conclusion that the C(9) phenyl moiety cannot be coplanar with the xanthene scaffold, due to the unfavorable interaction of the C(2′) and C(6′) substituents (including hydrogen) with the xanthene hydrogens at C(1) and C(8). As a consequence, neither the C(9) phenyl nor its substituents can communicate with the xanthene moiety through resonance effects, but only through field effects. Furthermore, since the α-hydroxy ketone fragment is responsible for chelation, it is reasonable to assume that only the field effect of the substituents at C(2′) and C(6′) (i.e., the closest ones to the chelation site) may have some influence on metal chelation and reduction. For iron chelation, a few notable relationships can be observed. Most interestingly, the 4′-dimethylamino phenyl derivative 8 appears to be the least effective chelator at low pH values. Similarly, the ortho-substituted phenols 2 and 12 are among the worst chelating compounds at most pH values, with the exception of pH 5.5, at which the 2′-chloro-6′-fluorophenyl xanthone 12 became a better chelator than the 4′-dimethylamino derivative 8. Thus, we can conclude that the low efficiency of the latter most likely resulted from the protonation of the dimethylamino group under acidic conditions, which boosted its electron-withdrawing properties through charge formation [+N(CH3)2]. Next, both phenol 2 and phenol 12 also bear substituents with a withdrawing (-I) effect (OH, OCH3 or F, Cl, respectively). In sharp contrast, the 4′-trifluoromethyl phenyl derivative 9 (i.e., strongly electron-withdrawing again) is among the most efficient chelating agents under the majority of pH conditions studied. This discrepancy, together with the assumption that chelate formation should generally be supported by electron-rich ligands, strongly suggests that the electronic properties of the substituents on the C(9) phenyl have no direct impact on iron chelation. Their influence on other, no less important factors, such as the solvation of the resultant iron complexes, is probably of much higher significance.
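The protonation argument for derivative 8 can be made semi-quantitative with the Henderson-Hasselbalch relation. The sketch below assumes a conjugate-acid pKa near that of N,N-dimethylaniline (about 5.1); the actual pKa of the dimethylamino group in compound 8 is not given here, so the numbers are only indicative of the trend:

```python
def protonated_fraction(pH, pKa=5.1):
    """Fraction of the amine present as the charged, electron-withdrawing
    ammonium form at a given pH (Henderson-Hasselbalch)."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

for pH in (4.5, 5.5, 6.8, 7.5):
    print(f"pH {pH}: {100.0 * protonated_fraction(pH):.0f}% protonated")
```

At pH 4.5 roughly 80% of such an amine would be charged, while at pH 7.5 almost none is, which matches the observation that derivative 8 chelates worst at low pH.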
Similar to chelation, copper reduction with concomitant oxidation of the xanthones to o-quinones should also be facilitated by electron-rich ligands. In other words, electron-donating substituents should facilitate an electron transfer to Cu2+. However, no straightforward relationship between the electronic properties of the substituents and the ability of the respective xanthone to reduce Cu2+ can be found, either. For example, the 4′-trifluoromethyl phenyl derivative 9 is the most active compound at pH 4.5-6.8, while the 2′-chloro-6′-fluoro phenyl compound 12 is the least effective at neutral and mildly basic pH values (6.8 and 7.5). Another C(2′)-substituted phenol, substance 2, appears to be among the least active compounds as well. Of note, compounds 2, 9 and 12 all have substituents with -I (withdrawing) effects. Therefore, factors (solvation) other than the electronic effects of the substituents are more important in copper(II) reduction. In the second part of this study, we selected the most potent compound 9 and tested its suitability for possible future in vivo studies. Our initial assumption, based on the above results, was that iron- and copper-reducing effects can be convenient for a chelator designed for cancer treatment, rather than for metal overload conditions. This speculation seemed to be supported by two arguments: (1) tumors are formed by rapidly growing cells, which require large intakes of iron/copper, and hence are also more susceptible to the iron/copper-based production of ROS via Fenton chemistry. Increased ROS production with subsequent cellular death can be facilitated by weak chelators, which reduce iron/copper [15,17,34]. Indeed, out of our compounds, derivative 9 in particular was the most potent copper reductant in this study. (2) In addition to their metal-chelating effects, this series of xanthones possessed antiplatelet effects [28], and, according to novel investigations, activated platelets increase cancer cell survival and contribute to metastasis [35,36]. Indeed, our group has recently demonstrated the solid potential of these compounds to destroy cancer cells in a variety of tumor cell lines.
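Argument (1) invokes Fenton chemistry; for completeness, the classical reactions meant here are the following (the copper variant is usually termed Fenton-like):

```latex
\begin{align*}
\mathrm{Fe^{2+}} + \mathrm{H_2O_2} &\rightarrow \mathrm{Fe^{3+}} + \mathrm{HO^{\bullet}} + \mathrm{OH^{-}}\\
\mathrm{Cu^{+}} + \mathrm{H_2O_2} &\rightarrow \mathrm{Cu^{2+}} + \mathrm{HO^{\bullet}} + \mathrm{OH^{-}}
\end{align*}
```

A metal-reducing chelator regenerates Fe2+ or Cu+ and can thereby sustain this redox cycling; this is the mechanistic basis for expecting weak, reducing chelators to promote ROS-mediated death of metal-avid tumor cells.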
In particular, the most active metal-chelating and metal-reducing compound from this study, the trifluoromethyl derivative (9), was also the most potent antiproliferative compound in those experiments [23]. In order to expand this knowledge, we selected a breast cancer line together with the corresponding resistant alternative, since we had not included such a line in our previous experiments. Very surprisingly, 9 was about 30-200 times less potent in the selected cancer cell line, compared to its effect on cervical, colorectal, hepatocellular and alveolar adenocarcinomas. The difference between the fulvestrant-sensitive and -resistant cancer lines was relatively low (about twofold). Although the explanation for such a huge discrepancy is unclear, our experiments demonstrated that our theoretical assumption was not correct. While the cytotoxicity of the most active compound (9) was observed, the same compound was able to eliminate the cytotoxicity of copper against erythrocytes and conventional cancer cells. The protective effect against Cu toxicity on resistant cancer cells was lower. This can be regarded as a truly new and unexpected result, suggesting that the metal-reducing activity does not necessarily need to be directly translated into cytotoxicity. Moreover, considering that 500 µM of this compound did not cause any increase in red blood cell lysis under normal conditions and that its IC50 in relation to cancer cells was more than 250 µM, the toxicity of this compound under normal conditions seems to be very low. The only questionable issue remains whether iron reduction can stimulate iron toxicity in red blood cells, but we observed this phenomenon only when using a very excessive concentration of iron (2.5 mM), and such a strong overload is more hypothetical than realistic.

Conclusions

The 2,6,7-trihydroxyxanthene-3-ones tested are iron chelators with reversible copper-chelating properties. The compounds also reduce iron and copper, and hence were previously assumed to have pronounced cytotoxic effects, thereby being unsuitable for use under metal overload conditions. This assumption was disproved in this study since, on the contrary, the most active compound, the 4′-trifluoromethyl derivative 9, protected both erythrocytes and cancer cells against copper toxicity. Because the ex vivo toxicity of xanthone 9 is low, the compound can be regarded as a suitable lead for the development of protective agents for potential use under metal overload conditions. Since its effect on cancer cells seems to be highly dependent on the selected lines, potential use is likely to be limited to some types of cancers.

Supplementary Materials: There are Supplementary data to this article with additional results. The following are available online at http://www.mdpi.com/2076-3417/10/14/4846/s1, Figure S1: Iron chelation by other compounds not shown in Figure 1, Figure S2: Comparison of iron chelation of representative compounds, Figure S3: Iron reduction-other compounds not shown in Figure 3, Figure S4: Comparison of cupric chelation (the hematoxylin assay), Figure S5: Chelation of copper ions assessed by the BCS method.
Figure S6: Chelation of cuprous and cupric ions by the 2′-chloro-6′-fluoro derivative (12) assessed by the BCS method, Figure S7: Example of comparisons of copper reduction lines with 95% confidence intervals, Figure S8: Likely scenarios of the interaction of Fe3+ and Cu2+ with the tested xanthones, Figure S9: Cytotoxicity of the 4′-trifluoromethyl derivative (9) alone and in combination with Cu2+ or Fe2+ at a concentration of 500 µM, Table S1: Cytotoxicity of Cu2+, Fe2+ and the 4′-trifluoromethyl derivative (9).
Reply to "Comment on 'Glass Transition, Crystallization of Glass-Forming Melts, and Entropy'" by Zanotto and Mauro

A response is given to a comment of Zanotto and Mauro on our paper published in Entropy 20, 103 (2018). Our arguments presented in that paper are widely ignored by them, and no new considerations are outlined in the comment which would require a revision of our conclusions. For this reason, we restrict ourselves here to a brief response, supplementing it by some additional arguments in favor of our point of view not included in our above-cited paper.

Introduction

The main part of our paper [1] and the comment on it [2] are concerned with the questions: (i) whether continuous relaxation has to be included in the definition of glass; (ii) whether glasses always crystallize; further (as suggested by Zanotto and Mauro in [3]), (iii) how kinetic criteria of the glass transition can be formulated most appropriately; and (iv) whether glasses have a residual entropy or not. The differences between our and Zanotto and Mauro's points of view were described comprehensively in our paper [1]. Therefore, we provide here a brief response and supplement it by additional arguments not included in [1].

Definition of Glass and the Glass Transition

A minor part of our paper [1] was devoted to different definitions of the glass and the glass transition and to the formulation of kinetic criteria determining it as the basis for the subsequent analysis. In this connection, it is worth recalling the interpretation of the vitreous state and its relation to the metastable liquid and the crystal phase, respectively, as developed by Simon. It is reproduced in Figure 1a-c, adapted from the monograph by Gutzow and Schmelzer ([4], Figure 2.32). In brief, as formulated first by Simon, glasses are frozen-in non-equilibrium states (for more details, see the caption to Figure 1 and, e.g., [1,4]). The relaxation of a glass to the metastable equilibrium state and its further transformation to a crystal was supposed by Simon to be prevented, as a rule, for any reasonable time scales by kinetic reasons.

Figure 1: Mechanical analogy for an interpretation of the differences between (a) the glass, (b) the metastable liquid and (c) the crystalline state, which is stable at T < Tm (Tm is the melting or liquidus temperature). In this mechanical analogy, the crystalline state corresponds to an absolute minimum of the (thermodynamic) potential well, and the under-cooled melt to a higher local minimum. In order to be transferred from the metastable to the stable crystalline state, the system has to overcome a potential barrier denoted in nucleation theory as the work of critical cluster formation. The current state of the glass is represented in this analogy by a ball glued to the wall of the potential well above the minimum (a). Crystallization, if it occurs, is frequently preceded by stabilization processes, i.e., the approach to the metastable equilibrium state of the liquid [4-6]. This is commonly taken for granted in the analysis of crystal nucleation in terms of classical nucleation theory [4,7]. The modifications one has to introduce if this is not the case are described in detail in our papers [8,9]. In (d), a modification of Simon's picture of the vitreous state is given, accounting for the potential energy landscape picture of the evolution of glass-forming systems as advanced by Goldstein [10] (see the text).
Zanotto and Mauro [3] claim that their "new modern ideas" consist of the statement that glasses always relax and finally crystallize. From a thermodynamic point of view, they do not go beyond Simon's model and the particular way of formation of glasses he was analyzing. New developments in glass science since the times of Simon are not reflected in the definition proposed by them. Indeed, Zanotto and Mauro [3] even pose the question of whether Simon already had a point of view similar to theirs. In our paper [1], we reproduced in translation a respective statement by Simon showing that this is not the case. As noted by Davies and Jones [11]: "Simon pointed out that as a glass is cooled through its transformation temperature the molecular diffusion which is necessary to effect the appropriate change in configuration is increasingly inhibited and finally becomes practically impossible". The existence of long-time flow has been known since the 1850s and even earlier, as can be traced, for example, in the work of Kohlrausch reviewed in [12]. Nemilov and Johari [13] noted that James Prescott Joule had drawn attention to such flow processes by measuring the zero degree Celsius point over a period of 38.5 years (from April 1844 to December 1882). Numerous studies of the change in the density and refractive index of optical glass with time have been performed and published since the early 1930s. Anyway, for most (though not all) practical applications, flow and relaxation of glass are taken as irrelevant, and glass is treated as a solid. Zanotto and Mauro claimed that new developments in glass science require a new or modern definition of glass. However, genuinely new developments are not accounted for in the definition proposed by Zanotto and Mauro, and several statements are simply incorrect, as discussed in [1]. In addition, one could also try to truly advance Simon's picture, supplementing it with the potential energy landscape ideas originally proposed by Martin Goldstein [10] (see Figure 1d) and their implementation, accounting more appropriately for a combination of the general trends in the possible evolution of glasses formed via glass transition in cooling with the details of the evolution of glass-forming melts and glasses, respectively. As it seems to us, by such an approach, a variety of details (see, e.g., [14-18]) could possibly be given an interpretation not reflected in the original form of Simon's model. In such a more general approach, the thermodynamic properties of deeply supercooled liquids are dominated by the local potential energy minima, while the kinetics of relaxation and transport is governed by transitions between the local minima, as described in a review by Ediger and Harrowell [18].

Greek Philosophy and Kinetic Criteria of Glass Formation

Reiner [19] introduced the Deborah number relying on Heraclitus' statement that "Everything flows". His statement is cited in our paper [1] first to show that (i) since everything flows on historical time scales, it makes no sense to include such a feature in the definition of some particular state of matter.
Moreover, (ii) we demonstrated that it is not the relation between the experimental observation time (not specified by Zanotto and Mauro in [3]) and the structural relaxation time that leads to glass formation in cooling or similar processes, but the interplay between the characteristic time of change of the external control parameters (clearly defined by us via their rate of change and, for cooling and heating, the glass transition temperature) and the relaxation time. As shown, all specific kinetic criteria proposed in the literature on glass formation are special (approximate) expressions of the general criterion derived by us [1,4,20]. The Deborah number was introduced by Reiner to distinguish between liquids and solids, not between liquids and glasses. It can be adapted to the glass transition, but this has to be done in a correct way, as described by us [4,20].

Flow vs. Relaxation

In our paper [1], it is demonstrated that flow and relaxation are interrelated. This correlation is expressed by the Maxwell relation [4] connecting the relaxation time with the Newtonian viscosity. Zanotto and Gupta [21,22] used this relation to describe the change in the shape of window glass with time by gravitational flow. Consequently, any attempts to artificially distinguish both processes as independent are incorrect. Zanotto and Mauro [3] further mention the necessity of introducing a spectrum of relaxation times for describing the properties of glass-forming melts. This necessity is described by us in [1,4]. For the description of relaxation, we employ a relation of the form τR = τR(p, T, ξ); here, the relaxation time, τR, is a function of pressure, p, temperature, T, and at least one structural order parameter, ξ. We showed in [8,9] that this dependence of the relaxation time on the structural order parameters may give the key to understanding deviations from Maxwell's relaxation law, like the stretched exponential relation. Hence, we proposed a solution to the long-standing problem [23] of how stretched-exponential relaxation can be understood from a theoretical point of view. We also discussed in detail why different quantities relax by different laws, and noted that the dependence of the relaxation time on the structural order parameter automatically yields a spectrum of relaxation times [24].

Temperature Dependence of the Viscosity

Whether the viscosity diverges at low temperatures (as implied by the Vogel-Fulcher-Tammann (VFT) equation [4]) or not is a matter of debate [25-27]. This problem cannot be resolved by direct experimental investigations, which are restricted to maximum values of viscosity η < 10^18 Pa·s. In case the predictions of the VFT or similar relations hold true, the definition of glass proposed by Zanotto and Mauro is invalid not only for practical purposes but also from a principal point of view. The advantages of the VFT equation have also been noted in [28] by one of the authors of the Comment [3], who claimed to have given there a statistical-mechanical derivation of another empirical model established experimentally by Waterton in 1932 [29]. As noted in [4], this relation was proposed even earlier by le Chatelier. It was then widely employed by Schischakov for describing the temperature dependence of the viscosity. We consequently consider it misleading to denote the le Chatelier-Waterton-Schischakov equation as the MYEGA equation.
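For reference, the relations invoked in the last two sections can be written out explicitly. The parametrizations below are the generic textbook forms; the constants A, B, C, η0 and T0 are fit parameters, not values taken from [28,29]:

```latex
\begin{align*}
&\tau_R = \frac{\eta}{G_\infty} && \text{(Maxwell relation; $G_\infty$: instantaneous shear modulus)}\\
&\tau_R = \tau_R(p, T, \xi) && \text{(relaxation time with structural order parameter $\xi$)}\\
&\eta(T) = \eta_0 \exp\!\left(\frac{B}{T - T_0}\right) && \text{(VFT; diverges at the finite temperature $T_0$)}\\
&\log \eta(T) = A + \frac{B}{T}\exp\!\left(\frac{C}{T}\right) && \text{(le Chatelier-Waterton-Schischakov; finite for all $T > 0$)}
\end{align*}
```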
Having stressed in [28] the absence of a divergence of the viscosity at low temperatures as one of the advantages of the le Chatelier-Waterton-Schischakov equation, Mauro some years later joined a group of authors [30] stating the opposite: a divergence of the viscosity and/or relaxation time does occur, and the temperature of divergence of the relaxation time and the Kauzmann temperature (stated, in contrast to [28], to exist in accordance also with a variety of other investigations (see [1,31,32])) coincide. At least for the 55 liquids and glasses analyzed in [30], there exist ranges of temperature and pressure where (as noted above) relaxation and crystallization are principally excluded. Finally, in our discussion in [1], we focused attention on qualitative features and mentioned that the conclusions derived by us do not depend on any particular choice of the equation for describing the viscosity. That the viscosity does, in general, also depend on the degree of deviation from equilibrium is well known [4,9], but it is irrelevant for the purposes under consideration here.

Crystallization

That glasses may crystallize is not a matter of discussion; the question is whether glasses always finally crystallize or not. Several examples are provided in our paper [1] showing that this is not the case. This conclusion is confirmed by a recent computer simulation of crystallization and the glass transition [33] and also by the "paradox of old glasses" as formulated by Berthier and Ediger [34] (glasses do not crystallize at normal conditions on relevant time scales). Moreover, some of the most frequently used polymer glasses, namely atactic poly(methyl methacrylate), do not crystallize at all. For example, in a recent paper [35] entitled "The Ultimate Fate of Supercooled Liquids", Stephenson and Wolynes concluded that "some atactic polymers or heteropolymers may not be able to crystallize at all because they have no plausible competing periodic crystal structure, most everyday glass substances are only kinetically prevented from crystallizing on human time scales". In [4,36], Tammann's development method is discussed as a major tool in the experimental analysis of crystallization. It was developed by Tammann long ago and is widely employed in the analysis of crystal nucleation in glass-forming melts. The reason is that at the temperatures where crystals may nucleate, the nuclei frequently do not grow. Moreover, Zanotto et al. have also drawn attention to the fact that "very few silicate glasses show internal homogeneous nucleation" [37].

Broken Ergodicity and Entropy

In [1], we concluded that glasses do have a residual entropy, in agreement with the well-established point of view as advanced in the previous century. There is no need to wait for "the ultimate truth (that) must come from experiments" (as stated in [2]). Such a suggestion was already formulated about a decade ago [38]. A variety of such experiments do exist; they are described in [1] and in the references cited therein, supporting the traditional point of view. Previously claimed experimental proofs of the alternative point of view, like the one advanced in [39], are shown to be incorrect in [40]. We further illustrated our conclusions by a simple model based on statistical mechanical models and the thermodynamics of irreversible processes. All essential features of the glass transition are reproduced by accounting for the increase of the viscosity and/or relaxation time with decreasing temperature.
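The calorimetric route to the residual entropy mentioned above can be stated explicitly. In its textbook form, one integrates the heat capacity difference between the supercooled liquid/glass branch and the crystal down from the melting point:

```latex
S_{\mathrm{res}}(T \to 0) = \frac{\Delta H_m}{T_m} + \int_{T_m}^{0} \frac{C_p^{\mathrm{liquid/glass}}(T) - C_p^{\mathrm{crystal}}(T)}{T}\, \mathrm{d}T
```

Classical measurements of this kind, e.g., for glycerol, yield a nonzero residual entropy, which is the experimental backbone of the traditional point of view defended here.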
In [1], we already discussed the paper by Goldstein [41] showing that a zero value of the residual entropy violates the second law of thermodynamics. However, even if such a consequence is accepted, the approach followed by Mauro et al. leads to internal inconsistencies, as elaborated in detail by P. Gujrati [42]. In addition, a comprehensive analysis of the theoretical aspects of the problems under consideration has been performed by Nemilov [43], resulting in the conclusion: "If we rely upon the classical works of Gibbs, Planck, Einstein, Fermi, Prigogine, and other authors of modern physics, it is impossible to accept the limitations of the thermodynamic consideration of the vitreous state proposed by Gupta, Mauro and co-authors".

Final Remarks

Summarizing, the main part of our paper [1] and the comment on it [2] are concerned with the questions of whether (i) the aspect of continuous relaxation has to be included in the definition of glass and (ii) glasses always ultimately crystallize. We continue to follow the point of view, in line with the fathers of glass science (like Tammann and Simon, frequently referred to here and by many others), that, since everything flows at large time scales, the first point is not a distinguishing feature that has to be included in the definition of a particular state of matter. Examples are given showing that for some glasses, relaxation and crystallization are completely excluded, so both are not general features that need to be included in the definition of glass. Finally, (iii) general kinetic criteria of the glass transition can be formulated relying on the relation between the characteristic times of change of the external control parameters and the relaxation time, and (iv) glasses do have a residual entropy, as established theoretically and experimentally by numerous outstanding scientists long ago.
Recent Advances in Heat Transfer Enhancements: A Review Report

Different heat transfer enhancers are reviewed. They are (a) fins and microfins, (b) porous media, (c) large particles suspensions, (d) nanofluids, (e) phase-change devices, (f) flexible seals, (g) flexible complex seals, (h) vortex generators, (i) protrusions, and (j) ultra high thermal conductivity composite materials. Most of the heat transfer augmentation methods presented in the literature that assist fins and microfins in enhancing heat transfer are reviewed. Among these are the use of joint-fins, fin roots, fin networks, biconvections, permeable fins, porous fins, capsulated liquid metal fins, and helical microfins. It is found that not much agreement exists between the works of different authors regarding single-phase heat transfer augmented with microfins. However, many works with sufficient agreement have been done in the case of two-phase heat transfer augmented with microfins. With respect to nanofluids, there are still many conflicts among the published works about both the heat transfer enhancement levels and the corresponding mechanisms of augmentation. The reasons behind these conflicts are reviewed. In addition, this paper describes flow and heat transfer in porous media as a well-modeled passive enhancement method. It is found that there are very few works which dealt with heat transfer enhancements using systems supported with flexible/flexible-complex seals. Eventually, many recent works related to passive augmentation of heat transfer using vortex generators, protrusions, and ultra high thermal conductivity composite materials are reviewed. Finally, theoretical enhancement factors along with many heat transfer correlations are presented in this paper for each enhancer.

Introduction

The way to improve heat transfer performance is referred to as heat transfer enhancement (or augmentation or intensification). Nowadays, a significant number of thermal engineering researchers are seeking new methods of enhancing heat transfer between surfaces and the surrounding fluid. Due to this fact, Bergles [1,2] classified the mechanisms of enhancing heat transfer as active or passive methods. Those which require external power to maintain the enhancement mechanism are named active methods. Examples of active enhancement methods are stirring the fluid well or vibrating the surface [3]. Hagge and Junkhan [4] described various active mechanical enhancing methods that can be used to enhance heat transfer. On the other hand, the passive enhancement methods are those which do not require external power to sustain the enhancement characteristics. Examples of passive enhancing methods are: (a) treated surfaces, (b) rough surfaces, (c) extended surfaces, (d) displaced enhancement devices, (e) swirl flow devices, (f) coiled tubes, (g) surface tension devices, (h) additives for fluids, and many others.

Mechanisms of Augmentation of Heat Transfer

To the best knowledge of the authors, the mechanisms of heat transfer enhancement can be at least one of the following.
(1) Use of a secondary heat transfer surface.
(2) Disruption of the unenhanced fluid velocity.
(3) Disruption of the laminar sublayer in the turbulent boundary layer.
(4) Introducing secondary flows.
(5) Promoting boundary-layer separation.
(6) Promoting flow attachment/reattachment.
(7) Enhancing effective thermal conductivity of the fluid under static conditions.
(8) Enhancing effective thermal conductivity of the fluid under dynamic conditions.
(9) Delaying the boundary layer development.
(10) Thermal dispersion.
(11) Increasing the order of the fluid molecules.
(12) Redistribution of the flow.
(13) Modification of the radiative property of the convective medium.
(14) Increasing the difference between the surface and fluid temperatures.
(15) Increasing the fluid flow rate passively.
(16) Increasing the thermal conductivity of the solid phase using special nanotechnology fabrications.

Methods using mechanisms no. (1) and no. (2) include increasing the surface area in contact with the fluid to be heated or cooled by using fins, intentionally promoting turbulence in the wall zone by employing surface roughness and tall/short fins, and inducing secondary flows by creating swirl flow through the use of helical/spiral fin geometry and twisted tapes. This tends to increase the effective flow length of the fluid through the tube, which increases the heat transfer but also the pressure drop. For internal helical fins, however, the effect of swirl tends to decrease or vanish altogether at higher helix angles, since the fluid flow then simply passes axially over the fins [5]. On the other hand, for twisted tape inserts, the main contribution to the heat transfer augmentation is due to the effect of the induced swirl. Due to the form drag and increased turbulence caused by the disruption, the pressure drop with flow inside an enhanced tube always exceeds that obtained with a plain tube for the same length, flow rate, and diameter.

Turbulent flow in a tube exhibits a low-velocity flow region immediately adjacent to the wall, known as the laminar sublayer, with the velocity approaching zero at the wall. Most of the thermal resistance occurs in this low-velocity region. Any roughness or enhancement technique that disturbs the laminar sublayer will enhance the heat transfer [6]. For example, in a smooth tube of 25.4 mm inside diameter, at Re = 30,000, the laminar sublayer thickness is only 0.0762 mm under fully developed flow conditions. The internal roughness of the tube surface is well known to increase the turbulent heat transfer coefficient. Therefore, for the example at hand, an enhancement technique employing a roughness or fin element of height ~0.07 mm will disrupt the laminar sublayer and will thus enhance the heat transfer. Accordingly, mechanism no. (3) is a particularly important heat transfer mechanism for augmenting heat transfer.

Li et al. [5] described the flow structure in helically finned tubes using flow visualization by means of high-speed photography employing the hydrogen bubble technique. They used four tubes with rounded ribs having helix angles between 38° and 80° and one or three fin starts in their investigation. Photographs taken by them showed that in laminar flow, bubbles follow parabolic patterns, whereas in turbulent flow these patterns break down because of random separation vortices. Also, for tubes with helical ridges, transition to turbulent flow was observed at lower Reynolds numbers compared to smooth tube values. Although swirl flow was observed for all tubes in the turbulent flow regime, the effect of the swirl was observed to decrease at higher helix angles. Li et al. [5] concluded that spiral flow and boundary-layer separation flow both occurred in helical-ridging tubes, but with different intensities in tubes having different configurations. As such, mechanisms no. (4) and no. (5) are also important heat transfer mechanisms for augmenting heat transfer.
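The 0.0762 mm sublayer thickness quoted earlier in this section can be reproduced from standard smooth-tube relations, taking the viscous sublayer to extend to y+ ≈ 5 and using the Blasius friction factor; both are the usual textbook assumptions, and the bulk velocity conveniently cancels out of the result:

```python
import math

D = 0.0254    # tube inside diameter, m (25.4 mm)
Re = 30_000   # Reynolds number, fully developed turbulent flow

f = 0.316 * Re ** -0.25            # Blasius (Darcy) friction factor
u_tau_over_U = math.sqrt(f / 8.0)  # friction velocity / bulk velocity
# Sublayer edge at y+ = y * u_tau / nu = 5; with nu = U * D / Re the
# unknown bulk velocity U drops out of the thickness:
delta = 5.0 * D / (Re * u_tau_over_U)
print(f"laminar sublayer thickness = {delta * 1000:.4f} mm")  # ~0.077 mm
```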
Arman and Rabas [7] discussed the turbulent flow structure as the flow passes over a two-dimensional transverse rib. They identified the various flow separation and reattachment/redevelopment regions as: (a) a small recirculation region in front of the rib, (b) a recirculation region after the rib, (c) a boundary layer reattachment/redevelopment region on the downstream surface, and finally (d) flow up and over the subsequent rib. The authors, noting that recirculation eddies are formed above these flow regions, identified two peaks that occur in the local heat transfer: one at the top of the rib and the other in the downstream recirculation zone just before the reattachment point. They also stated that the heat transfer enhancement increases substantially with increasing Prandtl number. Therefore, mechanism no. (6) plays an important role in heat transfer enhancements.

Heat transfer enhancements associated with fully/partially filling the fluidic volume with a porous medium take place by the following mechanisms [8,9]: (7) enhancing the effective thermal conductivity of the fluid under static conditions; (8) enhancing the effective thermal conductivity of the fluid under dynamic conditions; (9) delaying the boundary layer development; (10) thermal dispersion; (11) increasing the order of the fluid molecules; (12) redistribution of the flow; and (13) modification of the radiative property of the convective medium.

Ding et al. [10] showed that fluids containing 0.5 wt.% of carbon nanotubes (CNT) can produce heat transfer enhancements over 250% at Re = 800, with the maximum enhancement occurring at an axial distance of approximately 110 times the tube diameter. These types of mixtures are named in the literature as "nanofluids", and they will be discussed later in this report. The increases in heat transfer due to the presence of nanofluids are thought to be associated with the following mechanisms: (7) enhancing the effective thermal conductivity of the fluid under static conditions, and (8) enhancing the effective thermal conductivity of the fluid under dynamic conditions.
Flexible fluidic thin films were introduced in the works of Khaled and Vafai [11,12] and Khaled [13]. In their works, they describe a new passive method for enhancing the cooling capability of fluidic thin films. In summary, flexible thin films utilize soft seals to separate their plates instead of having a rigid thin film construction. Khaled and Vafai [11] demonstrated that more cooling is achievable when flexible fluidic thin films are utilized. The expansion of the flexible thin film, including the flexible microchannel heat sink, is directly related to the average internal pressure inside the microchannel. An additional increase in the pressure drop across the flexible microchannel not only increases the average velocity but also expands the microchannel, causing an apparent increase in the coolant flow rate which, in turn, increases the cooling capacity of the thin film. Khaled and Vafai [12] and Khaled [13] demonstrated that the cooling effect of flexible thin films can be further enhanced if the supporting soft seals contain closed cavities filled with a gas which is in contact with the heated plate boundary of the thin film. They referred to this kind of sealing assembly as "flexible complex seals". The resulting fluidic thin film device is expandable according to an increase in the working internal pressure or an increase in the heated plate temperature. Therefore, mechanism no. (15) can have an impact in enhancing heat transfer inside thermal systems. Mechanism no. (14) finds its applications when rooted fins are utilized as an enhancer of heat transfer [14]. Finally, the last mechanism will be discussed along with the topic of ultra high thermal conductivity composite materials.

Heat Transfer Enhancers

From the concise summary of the mechanisms of enhancing heat transfer described in the last section, it can be concluded that these mechanisms cannot be achieved without the presence of enhancing elements. These elements will be called "heat transfer enhancers". In this report, the following heat transfer enhancers will be explained: 3.1. Extended surfaces (fins); 3.2. Porous media; 3.3. Large particles suspensions; 3.4. Nanofluids; 3.5. Phase change devices; 3.6. Flexible seals; 3.7. Flexible complex seals; 3.8. Vortex generators; 3.9. Protrusions; and 3.10. Ultra high thermal conductivity composite materials.

Heat Transfer Enhancement Using Extended Surfaces (Fins)

3.1.1. Introduction. Fins are quite often found in industry, especially in the heat exchanger industry, as in the finned tubes of double-pipe, shell-and-tube and compact heat exchangers [15-20]. As an example, fins are used in air-cooled finned tube heat exchangers like car radiators and heat rejection devices. Also, they are used in refrigeration systems and in condensing central heating exchangers. Moreover, fins are also utilized in the cooling of large heat flux electronic devices as well as in the cooling of gas turbine blades [21]. Fins are also used in thermal storage heat exchanger systems including phase change materials [22-25]. To the best knowledge of the authors, fins as passive elements for enhancing heat transfer rates are classified according to the following criteria: (1) geometrical design of the fin; (3) number of fluidic reservoirs interacting with the fin; and (4) location of the fin base with respect to the solid boundary.

Laminar Single-Phase Heat Transfer in Finned Tubes
Laminar flow generally results in low heat transfer coefficients, and the fluid velocity and temperature vary across the entire flow channel width, so that the thermal resistance is not confined to the region near the wall as in turbulent flow. Hence, small-scale surface roughness is not effective in enhancing heat transfer in laminar flow; the enhancement techniques instead employ some method of swirling the flow or creating turbulence [6]. Laminar flow heat transfer and pressure drop in "microfin" tubes (discussed later) were experimentally measured by [35]. Their data showed that the heat transfer and pressure drop in microfin tubes were just slightly higher than in plain tubes, and they recommended that microfin tubes not be used under laminar flow conditions. This outcome has also been confirmed in the investigations of Shome and Jensen [36], who concluded that "microfinned tube and tubes with fewer numbers of tall fins are ineffective in laminar flows with moderate free convection, variable viscosity, and entrance effects as they result in little or no heat transfer enhancement at the expense of fairly large pressure drop penalty".

Turbulent Single-Phase Heat Transfer in Finned Tubes

Turbulent flow and heat transfer in finned tubes have been widely studied in the past, and the literature available on experimental investigations of turbulent flow and heat transfer in finned tubes is quite extensive. One of the earliest experimental works on the heat transfer and pressure drop characteristics of single-phase flows in internally finned tubes dates back to 1964, when Hilding and Coogan [37] presented their data for ten different internal fin geometries for a 0.55 in. (14 mm) inner diameter copper tube with 0.01 in. (0.254 mm) straight brass fins using air as the test fluid. Hilding and Coogan [37] observed that the heat transfer is enhanced by around 100-200% over that of the smooth tube and that the enhancement is accompanied by a similar increase in the pressure drop. The Reynolds number in this study ranged from 1,500 to 50,000.

Kalinin and Yarkho [38] used different fluids in the range 1500 ≤ Re ≤ 400,000 and 7 ≤ Pr ≤ 50 to investigate the effect of the Reynolds and Prandtl numbers on the effectiveness of heat transfer enhancement in smoothly outlined internally grooved tubes. The range of the transverse groove heights tested was 0.983 ≥ d/D ≥ 0.875 (where d is the fin tip diameter and D is the inner diameter), with a maximum groove spacing equal to the pipe nominal diameter. They reported that the critical Reynolds number at which transition to turbulent flow occurs decreased from 2400 for a smooth tube to 1580 for a grooved tube at d/D = 0.875 and a fin spacing equal to half the pipe diameter, with a maximum increase in the heat transfer coefficient of up to 2.2 times the minimum measured value. The authors also observed that the behavior of the Nusselt number over the tested range of Prandtl numbers is independent of the Prandtl number.

In their two papers, Vasilchenko and Barbaritskaya [39,40] published their results for the heat transfer and pressure drop of turbulent oil flow in straight finned tubes with 4 ≤ N ≤ 8 and 0.13 ≤ e/D ≤ 0.3, for an operating condition range of 10^3 ≤ Re ≤ 10^4 and 70 ≤ Pr ≤ 140. Their results showed that the heat transfer is enhanced by 30% to 70% over that of smooth tubes for the finned tubes tested. Correlations for predicting the friction factors and the Nusselt numbers were also presented.
In the work of Bergles et al. [41], heat transfer and pressure drop data for straight and spiral finned tubes with fin heights from 0.77 to 3.3 mm were investigated with water as the working fluid. The Reynolds number based on the hydraulic diameter ranged from around 1,500 to 50,000. They found an earlier transition from laminar to turbulent flow, and their friction factor data indicated that the smooth tube friction factor correlations could also be used for the tested finned tubes in the turbulent region. The heat transfer coefficients were found to be up to twice those of comparable smooth tubes. From their heat transfer data, they concluded that the hydraulic diameter approach is effective for correlation only in the case of straight fins of moderate heights.

Watkinson et al. [42,43], in two separate works, performed experiments for water and air flows, respectively, in a tube-in-tube heat exchanger under isothermal heating conditions to study the turbulent heat transfer and pressure drop characteristics of straight and helically finned tubes. A total of eighteen tubes, 5 with straight fins and 13 with spiral fins, having internal fin geometries with fin starts from 6 to 50, 0.026 ≤ e/D ≤ 0.158, helix angles from 0° to 15°, and inside diameters of 0.420 to 1.196 inch, were examined for 7 × 10^3 ≤ Re ≤ 3 × 10^5 and 0.7 ≤ Pr ≤ 3.4. A commercial smooth copper tube was also tested for comparison. The air results presented show that for most tubes, from Re = 50,000 up to Re = 300,000, the heat transfer is enhanced up to 95% over that of a smooth tube. On the other hand, in the water flow tests, at Re = 50,000 the heat transfer is enhanced up to 87% over that of a smooth tube, but at higher Reynolds numbers, the finned tubes approached smooth tube heat transfer performance. A maximum increase of 100% in the pressure drop over the smooth tube was observed for tubes with tall helical fins. Separate empirical nondimensional heat transfer correlations were presented for water and air, for both straight and spiral fin tubes, having a form similar to the smooth tube turbulent Sieder-Tate correlation with additional parameters consisting of the ratios of the inter-fin spacing to the tube diameter and the fin pitch to the tube diameter. For straight fin tubes the inter-fin spacing-to-diameter ratio, and for spiral finned tubes the pitch-to-diameter ratio, were incorporated to form a modified Blasius-type correlation for predicting the friction factor. These correlations predicted their data to within a maximum error of 13%.
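Several of the studies above and below report enhancement relative to smooth-tube baselines of the Dittus-Boelter, Sieder-Tate, and Blasius types, and build their finned-tube correlations as modified versions of these. For orientation, the classical smooth-tube forms are sketched below (standard textbook correlations; the geometry-dependent correction factors of the cited papers are not reproduced here):

```python
def nu_dittus_boelter(Re, Pr, heating=True):
    """Smooth-tube Nusselt number: Nu = 0.023 Re^0.8 Pr^n, with n = 0.4
    for heating or 0.3 for cooling; turbulent flow, roughly Re > 1e4."""
    return 0.023 * Re ** 0.8 * Pr ** (0.4 if heating else 0.3)

def nu_sieder_tate(Re, Pr, mu_bulk, mu_wall):
    """Smooth-tube Nusselt number with the viscosity-ratio correction for
    strongly temperature-dependent fluids (e.g., oils)."""
    return 0.027 * Re ** 0.8 * Pr ** (1.0 / 3.0) * (mu_bulk / mu_wall) ** 0.14

def f_blasius(Re):
    """Smooth-tube Darcy friction factor, roughly 4e3 < Re < 1e5."""
    return 0.316 * Re ** -0.25

# Baseline values at one of Watkinson's water-flow conditions
print(nu_dittus_boelter(50_000, 3.0), f_blasius(50_000))
```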
Carnavos [44] tested eight finned tubes (both straight and helically finned) to obtain heat transfer and pressure drop data for the cooling of air in turbulent flow employing a double tube heat exchanger. The tubes tested had fin starts from 6 to 16 and helix angles from 2.5° to 20°. The results were presented on the hydraulic diameter basis for the range 10^4 ≤ Re_h ≤ 10^5, and correlations were proposed to predict the heat transfer and the pressure drop. The reported heat transfer correlation was in the form of a modified Dittus-Boelter single-phase correlation having an additional correction factor "F", consisting of the ratios of the nominal heat transfer area to the actual heat transfer area and the actual flow area to the core flow area, respectively. The secant of the helix angle raised to the third power was also included in "F". The friction factor correlation was also in the form of a modified Blasius-type equation with a correction factor "F*" comprising the ratio of the actual flow area to the nominal flow area and the secant of the helix angle. In a later investigation, Carnavos [45] used the same apparatus and three more tubes, with the number of fin starts and helix angles up to 38 and 30°, respectively, to extend his air results by including experimental data for the heating of water and a 50% w/w ethylene glycol-water solution. The data for air obtained earlier [44] were reexamined, and a set of correlations that predicted the entire data obtained with air, water, and the ethylene glycol-water solution to within ±10% was proposed for the ranges 10^4 ≤ Re_h ≤ 10^5, 0.7 ≤ Pr ≤ 30, and 0° ≤ α ≤ 30°. The Nusselt number correlation was allowed to retain its original form, whereas the helix angle dependency in the friction factor correlation was slightly changed.

Armstrong and Bergles [46] conducted experiments on the electric heating of air in the range 9,000 ≤ Re ≤ 120,000 and Pr = 0.71, using seven different silicon carbide finned tubes, all having straight fins. The tubes tested had fin starts from 8 to 24 and e/D from 0.06 to 0.15. The results indicated that the heat transfer is enhanced by around 30-100% over that of a smooth silicon carbide tube. Their heat transfer data were predicted to within ±20% by the Carnavos [45] heat transfer correlation, but a large disagreement was observed between the measured friction factors and those predicted by the Carnavos [45] friction factor correlation.
Since their introduction more than 20 years ago, "microfin" tubes have received a lot of attention, playing a very significant role in modern, high-efficiency heat transfer systems. Microfin enhancements are of special interest because the amount of extra material required for microfin tubing is much less than that required for other types of internally finned tubes [47]. Of the many enhancement techniques which have been proposed, passively enhanced tubes are relatively easy to manufacture, cost-effective for many applications, and can be used for retrofitting existing units, whereas active methods, such as vibrating tubes, are costly and complex [48]. Moreover, these tubes ensure a large heat transfer enhancement with a relatively small increase in the pressure drop penalty. Microfin tubes are typically made of copper and have an outside diameter between 4 and 15 mm. The principal geometric parameters that characterize these tubes [49] are: the external diameter, the fin height (from 0.075 mm to 0.4 mm), the helix angle (from 10° to 35°), and the number of fin starts (from 50 to 60). These dimensions are in contrast to other types of internal finning that seldom exceed 30 fins per inch, with fin heights several factors higher than the microfin tube height. Currently, tubes with axial and helical fins, in rectangular, triangular, trapezoidal, crosshatched, and herringbone patterns, are available. Important dimensionless geometric variables of internally microfinned tubes include the dimensionless fin height (ε/D, fin height/internal diameter) and the dimensionless fin pitch (p/ε, fin spacing/fin height). A microfin tube typically has 0.02 ≤ ε/D ≤ 0.04 and 1.5 ≤ p/ε ≤ 2.5 [50]. As microfinned tubes are typically used in evaporators and condensers, most of the extensive existing research literature on microfinned tube performance characteristics is devoted to two-phase refrigerant flows. Schlager et al. [51] and Khanpara et al. [52] are typical examples of such investigations, showing a 50% to 100% increase in boiling and condensation heat transfer coefficients with only a 20% to 50% increase in pressure drop. However, the single-phase performance of microfinned tubes is also an important consideration in the design of refrigeration condensers, as a substantial proportion of the heat transfer area of these condensers is taken up in the desuperheating and later subcooling of the refrigerant. Consequently, accurate correlations for predicting the single-phase heat transfer and pressure drop inside microfinned tubes are necessary in order to predict the performance of these condensers and to optimize the design of the system. Khanpara et al. [52] investigated the heat transfer characteristics of R-113, testing eight microfinned tubes in the range 60 ≤ N ≤ 70, 0.005 ≤ e/D ≤ 0.02, and helix angles from 8° to 25°, for 5 × 10^3 ≤ Re ≤ 11 × 10^3. The results presented for the single-phase heating of R-113 indicated that the heat transfer is enhanced by around 30%-80%. The authors concluded that a major part of the enhancement is due to the increase in the area available for heat transfer, and a part of the enhancement is due to flow separation and flow swirling effects induced by the helical fins. This is because the corresponding increase in the heat transfer area over that of a smooth tube is around 10%-50% for the tubes tested.
In a subsequent paper, Khanpara et al. [53] also reported the local heat transfer coefficients for single-phase liquid R-22 and R-113 flowing through a smooth and an internally finned tube of 9.52 mm outer diameter, in their paper on the in-tube evaporation and condensation characteristics of microfinned tubes. The single-phase experiments were performed by direct electrical heating of the tube walls in the Reynolds number range of 5,000 to 11,000 for R-113 and 21,000 to 41,000 for R-22. The microfinned tube had 60 fin starts, a fin height of 0.22 mm, and a helix angle of approximately 17°. The heat transfer coefficients for the internally finned tube were found to be 50% to 150% higher than the smooth tube values.

Al-Fahed et al. [54] experimentally tested a single microfinned tube with a 15.9 mm outside diameter having 70 helical fins with a fin height of 0.3 mm and a helix angle of 18°, using water as the test fluid in a tube-in-tube heat exchanger. Results were presented for isothermal heating conditions in the range 10,000 < Re < 30,000. Under the same conditions, comparative experiments with an internally smooth tube were also conducted. They noted that the heat transfer is enhanced by 20%-80% and the pressure drop is increased by around 30%-80% as compared to the smooth tube values. The experimentally obtained friction and heat transfer data were correlated as a Blasius and a Sieder-Tate type correlation, respectively. The heat transfer correlation predicted their data to within ±25%, showing a large error band, while no error band was reported for the friction factor results. The authors reasoned that at Re > 25,000 the heat transfer enhancement ratio is moderate, plausibly because at higher Re numbers the turbulence effect in microfinned tubes becomes similar to that in a plain tube.

Chiou et al. [55] conducted an experimental study with water using two internally finned tubes having the same outer diameter, equal to 0.375 in. (9.52 mm). The two tubes had 60 and 65 fins, fin heights of 0.008 in. (0.20 mm) and 0.01 in. (0.25 mm), and helix angles of 18° and 25°, respectively. The Reynolds number in this study ranged from about 4,000 to 30,000. Modified Dittus-Boelter type correlations were formulated to predict the value of the heat transfer coefficient for flow Reynolds numbers greater than about 15,000 and 13,000, respectively, for each tube. An available heat-momentum analogy based correlation for rough tubes, along with a set of constitutive equations for calculating the related roughness parameters, was utilized to propose correlations for predicting the friction factor and the Nusselt number over the entire range of Reynolds numbers tested.
Brognaux et al. [50] obtained experimental single-phase heat transfer coefficients and friction factors for three single-grooved and three cross-grooved microfin tubes, all having an equivalent diameter of 14.57 mm, a fin height of 0.35 mm, and 78 helical fins; only the fin helix angle was allowed to vary, between 17.5° and 27°. Using liquid water and air as the test fluids, the experiments were carried out in a double-pipe heat exchanger. Results were presented for cooling conditions in the range 2,500 < Re < 50,000 and 0.7 < Pr < 7.85. Validation experiments with an internally smooth tube were also conducted using water and air. Compared to a smooth tube, the maximum heat transfer enhancement reported was 95%, with a pressure drop increase of 80%, for water at Pr = 6.8. They also found that the friction factors in microfin tubes do not reach a constant value at high Reynolds numbers, as is usually observed in rough pipes. The authors also used their data in the range 0.7 < Pr < 7.85 (using only 2 of the tested tubes) to analyze the dependence of the heat transfer on the Prandtl number exponent. Using the heat-momentum transfer analogy as applied for rough surfaces, they presented their experimental heat transfer and friction factor data as functions of the "roughness Reynolds number" and, from cross-plots, deduced the Prandtl number exponent to be between 0.56 and 0.57. A Prandtl number exponent between 0.55 and 0.57 was also determined for the power law formulation Nu = C Re^m Pr^n. The authors also defined an "efficiency index" (which gives the ratio of the increase in heat transfer to the increase in friction factor for a finned versus a plain tube) and presented its value for the different tubes tested. The higher the efficiency index, the better the enhancement geometry.

Huq et al. [56] presented experimental heat transfer and friction data for turbulent air flow in a tube having internal fins in the entrance region as well as in the fully developed region. The tube/fin assembly was cast from aluminum to avoid any thermal contact resistance. The uniformly heated test section was 15.2 m in length, and the inner diameter of the tube was 70 mm, containing six equally spaced fins of height 15 mm. The Reynolds number based on the hydraulic diameter ranged from 2.6 × 10^4 to 7.9 × 10^4. The results presented by the authors exhibited high pressure gradients and high heat transfer coefficients in the entrance region, approaching the fully developed asymptotic values away from the entrance section. The enhancement of the heat transfer rate due to the integral fins was reported to be very significant over the entire range of flow rates studied in this experiment. The heat transfer coefficient, based on the inside diameter and nominal area of the finned tube, exceeded the unfinned tube values by 97% to 112% for the tested Reynolds number range. When compared at constant pumping power, an improvement as high as 52% was also observed for the overall heat transfer rate.
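The efficiency index of Brognaux et al. [50], used in this and several of the following studies, compares the heat transfer gain against the friction penalty at like conditions. A minimal helper follows (the sample numbers are illustrative, chosen to echo the 95%/80% water result above):

```python
def efficiency_index(nu_enhanced, nu_smooth, f_enhanced, f_smooth):
    """(Nu_enhanced / Nu_smooth) / (f_enhanced / f_smooth); values above 1
    mean the heat transfer gain outweighs the friction penalty."""
    return (nu_enhanced / nu_smooth) / (f_enhanced / f_smooth)

# 95% heat transfer increase vs. 80% friction increase -> index ~ 1.08
print(efficiency_index(1.95, 1.0, 1.80, 1.0))
```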
With the expressed objective of developing physically based, generally applicable correlations for the Nusselt number and friction factor for the finned tube geometry, Jensen and Vlakancic [57] carried out a detailed experimental investigation of turbulent fluid flow in internally finned tubes covering a wide range of fin geometries and operating conditions. Two geometrically identical double pipe heat exchangers were used. The test fluid (water and ethylene glycol were used) flowed through the tube side of each of the heat exchangers, in counter-flow with hot water in one test section and cold water in the other. Friction factor tests were also conducted under isothermal conditions. A total of sixteen pairs of tubes (15 finned and one smooth tube) with a wide range of geometric variations (inside diameters 24.64-21.18 mm, helix angles 0°-45°, fin heights 0.18 mm-2.06 mm, and number of fins 8-54) were tested. In the reported results, the authors first described the parametric effects of the different fin geometries on the turbulent friction factors and Nusselt numbers in internally finned tubes, and then went on to prescribe a criterion for labeling a tube as a "high" fin tube (2e/D > 0.06) or a "micro" fin tube. They stated that a microfin tube is characterized by its peculiar pressure drop behavior, with long-lasting transitional flow up to Re = 20,000. Trends in the reported data are different depending on whether the tube is a high-fin or a microfin tube. High-fin tubes show friction factor curves similar to those of a smooth tube, only displaced higher, with the friction factor increasing as the number of fins increases. For microfin tubes, in general, the friction factor is insensitive to the fin height and the Reynolds number up to Re = 20,000, but beyond this value the friction factor showed a decreasing trend with increasing Re, as in smooth tubes, and the effects of the number of fins, fin height, and helix angle also come into play: whenever any one of these parameters is increased, the friction factor increases (exceptions may occur due to differences in fin profile). Overall, the reported increase in the friction factor for the high-finned tubes ranged from 40%-170%, and for microfin tubes from 40%-140%, over smooth tubes. For both types of tubes, the reported trends of the slope of the Nu curves generally followed that of the smooth tube; however, the trends revealed a different slope at lower Re for the two categories of tubes. This characteristic was attributed by the authors to the greater capacity for swirling flow of the higher finned tubes. However, the trends with geometry were similar to those noted for the friction factors. Overall, the reported increase in Nu for the high-finned tubes ranged from 50%-150%, and for microfin tubes from 20%-220%, over smooth tubes. They reported that the correlations from the literature poorly predict their data and, based on the observed trends, went on to develop new correlations for friction factors and Nusselt numbers separately for the two categories of tubes (high fin and microfin) identified by them. These correlations are applicable to a wide range of geometric and flow conditions for both categories of tubes and estimated well both their data and the data from the literature.
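Jensen and Vlakancic's high-fin/microfin criterion lends itself to a one-line check. The helper below applies their 2e/D threshold; the function name is ours, and the example dimensions are taken from the extremes of their tested tube range:

```python
def tube_category(fin_height_mm, inner_diameter_mm):
    """Classify per Jensen and Vlakancic: 'high fin' if 2e/D > 0.06,
    otherwise 'microfin'."""
    ratio = 2.0 * fin_height_mm / inner_diameter_mm
    return ("high fin" if ratio > 0.06 else "microfin"), round(ratio, 3)

print(tube_category(2.06, 24.64))  # ('high fin', 0.167)
print(tube_category(0.18, 21.18))  # ('microfin', 0.017)
```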
Webb et al. [58] investigated the heat transfer and fluid flow characteristics of internally helically ribbed tubes. Using liquid water as the test fluid, the experiments were carried out in a double-pipe heat exchanger. Results were presented for cooling conditions in the range 20,000 < Re < 80,000 and 5.08 < Pr < 6.29. A total of eight tubes (7 ribbed and one smooth), all having an inside diameter of 15.54 mm but a wide range of geometric variations (helix angles 25°-45°, rib heights 0.327 mm-0.554 mm, and numbers of fin starts 10-45), were tested. The authors presented power-law empirical correlations, based on their experimental data, for the Colburn j-factor and the Fanning friction factor, which predicted their data reasonably well. The finned tube performance efficiency index as defined by Brognaux et al. [50] was also determined for the tubes tested, from which the authors concluded that the two key factors affecting the increase of the heat transfer coefficient in helically ribbed tubes are the area increase and the fluid mixing in the interfin region caused by flow separation and reattachment; the combination of the two determines the level of the heat transfer enhancement.

Copetti et al. [49] tested a single internally microfinned tube of 9.52 mm diameter using water as the test fluid. The microfin height was 0.20 mm, the fin helix angle was 18°, and the number of fin starts was 60. Results were presented for uniform heating conditions in the range 2000 < Re < 20,000. Under the same conditions, comparative experiments with an internally smooth tube were also conducted. They noted that the microfin tube provides higher heat transfer performance than the smooth tube, although the pressure drop increase is also substantial (in turbulent flow, h_microfin/h_smooth = 2.9 and Δp_microfin/Δp_smooth = 1.7 at the maximum Reynolds number tested). The finned tube performance efficiency index as defined by Brognaux et al. [50] was also determined and showed that the heat transfer increase was always superior to the pressure drop penalty. The experimentally obtained Nusselt numbers were empirically correlated separately as Dittus-Boelter, Sieder-Tate, and Gnielinski type correlations. These correlations predicted their data reasonably well.

Wang and Rose [59] compiled an experimental database of twenty-one microfin tubes for single-phase friction factors in spirally grooved, horizontal microfin tubes, covering a wide range of tube and fin geometric dimensions and Reynolds numbers and including data for water, R11, and ethylene glycol. The tubes had inside diameters at the fin root between 6.46 mm and 24.13 mm, fin heights between 0.13 mm and 0.47 mm, fin pitches between 0.32 mm and 1.15 mm, and helix angles between 17° and 45°. The Reynolds number ranged from 2.0 × 10^3 to 1.63 × 10^5. Six earlier friction factor correlations, each based on restricted data sets, were compared with the database as a whole. They reported that none was found to be in good agreement with all of the data and indicated that the Jensen and Vlakancic [57] correlation was the best, representing their database within ±21%.
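A database-wide screening of the kind Wang and Rose [59] performed reduces to computing the relative deviation of each predicted friction factor from its measured value; a minimal sketch follows, with made-up sample arrays standing in for the measured and predicted data (not the actual [59] database):

```python
import numpy as np

def correlation_deviation(f_measured, f_predicted):
    """Relative deviations (%) of predicted vs. measured friction factors,
    plus the band containing all points (cf. the ±21% figure in [59])."""
    f_m, f_p = np.asarray(f_measured), np.asarray(f_predicted)
    dev = 100.0 * (f_p - f_m) / f_m
    return dev.mean(), np.abs(dev).max()

# Illustrative sample only:
f_meas = [0.0081, 0.0074, 0.0069, 0.0090]
f_pred = [0.0086, 0.0070, 0.0075, 0.0084]
mean_dev, max_abs_dev = correlation_deviation(f_meas, f_pred)
print(f"mean deviation {mean_dev:+.1f}%, all data within ±{max_abs_dev:.1f}%")
```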
Han and Lee [60] obtained experimental single-phase heat transfer coefficients and friction factors for four microfinned tubes, all with 60 helical fins, using liquid water as the test fluid in a double-pipe heat exchanger. The tubes tested had fin helix angles between 9.2° and 25.2° and fin heights between 0.12 mm and 0.15 mm. Results were presented for cooling conditions in the range 3000 < Re < 40,000 and 4 < Pr < 7. Validation experiments with an internally smooth tube were also conducted. Using the heat-momentum transfer analogy, as used by Brognaux et al. [50], they presented their experimentally determined heat transfer and friction factor correlations as functions of the roughness Reynolds number, Re_ε, with a mean deviation and root-mean-square deviation of less than 6.4%. They noted that the microfin tubes reach the fully rough regime earlier than rough pipes, for which it starts at Re_ε = 70, and also confirmed the finding of Brognaux et al. [50] that the friction factors in microfin tubes do not reach a constant value at high Reynolds numbers, as is usually observed in rough pipes. No attempt was made to present a direct comparison of heat transfer enhancement between a smooth and a microfinned tube, but an efficiency index was defined; a smaller value of the efficiency index means a greater friction penalty to establish a given enhancement level. Using this index, the authors noted that tubes with higher relative roughness and smaller spiral angle show better heat transfer performance than tubes with larger spiral angle and smaller relative roughness. The authors concluded that the heat transfer area augmentation by higher relative roughness is the main contributor to the efficiency index.

Li et al. [61] experimentally investigated the single-phase pressure drop and heat transfer in a microfin tube with a 19 mm outside diameter, having 82 helical fins with a fin height of 0.3 mm and a helix angle of 25.5°, using oil and water as the test fluids in a tube-in-tube heat exchanger. Results were presented for cooling conditions in the range 2500 < Re < 90,000 and 3.2 < Pr < 220. The pressure drop data were collected under adiabatic conditions. Under the same conditions, comparative experiments with an internally smooth tube were also conducted. Their results showed that there is a critical Reynolds number, Re_cr, for heat transfer enhancement. For Re < Re_cr, the heat transfer in the microfin tube is the same as that in a smooth tube, but for Reynolds numbers higher than Re_cr, the heat transfer in the microfin tube is gradually enhanced compared with a smooth tube, reaching more than twice the smooth-tube value for Reynolds numbers greater than 30,000 with water as the working fluid. They attributed this behavior to the decrease in the thickness of the viscous sublayer with increasing Reynolds number: when the microfins are submerged within the viscous sublayer, the heat transfer is not enhanced, while when the microfins protrude beyond the viscous sublayer, the heat transfer is enhanced. They also investigated the Prandtl number dependency of the Nusselt number in the form Nu ∝ Pr^n and found that the Nusselt number is proportional to Pr^0.56 in the enhanced region and to Pr^0.3 in the nonenhanced region. For the high Prandtl number working fluid (oil, 80 < Pr < 220), the critical Reynolds number for heat transfer enhancement is about 6000, while for the low Prandtl number working fluid (water, 3.2 < Pr < 5.8), the critical Reynolds number for heat transfer enhancement is about 10,000.
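The mechanism proposed by Li et al. [61] can be checked with a back-of-the-envelope estimate: the viscous sublayer extends to roughly y+ ≈ 5, that is, a thickness of about 5ν/u_τ with u_τ = U√(f/8) (Darcy friction factor). A minimal sketch follows, assuming smooth-tube Blasius friction and water properties; all numbers are illustrative, not data from [61]:

```python
import math

def sublayer_vs_fin(Re, D, e, nu=1.0e-6):
    """Compare viscous sublayer thickness (y+ ~ 5) with fin height e.
    Assumes Blasius smooth-tube friction f = 0.316*Re**-0.25 (Darcy)."""
    U = Re * nu / D                      # bulk velocity, m/s
    f = 0.316 * Re ** -0.25              # Darcy friction factor (Blasius)
    u_tau = U * math.sqrt(f / 8.0)       # friction velocity, m/s
    delta_v = 5.0 * nu / u_tau           # sublayer thickness, m
    return delta_v, delta_v > e

# Fin height 0.3 mm as in [61]; inside diameter ~17 mm assumed for illustration
for Re in (5_000, 10_000, 30_000):
    d_v, submerged = sublayer_vs_fin(Re, 0.017, 0.0003)
    print(f"Re={Re:>6}: sublayer {d_v*1e3:.3f} mm, fins submerged: {submerged}")
```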
The reported friction factors in the microfin tube are almost the same as for a smooth tube for Reynolds numbers below 10,000. They become higher for Re > 10,000 and reach values 40%-50% greater than those in a smooth tube for Re > 30,000. They also concluded that the friction factors in the microfin tube do not behave as in a fully rough tube even at a Reynolds number of 90,000.

An artificial neural network (ANN) approach was applied by Zdaniuk et al. [62] to correlate experimentally determined Colburn j-factors and Fanning friction factors for the flow of liquid water in straight tubes with internal helical fins. The experimental data came from eight enhanced tubes reported later in Zdaniuk et al. [63]. The performance of the neural networks was found to be superior to that of the corresponding power-law regressions. The ANNs were subsequently used to predict the data of other researchers, but the results were less accurate. The ANN training database was then expanded to include experimental data from two independent investigations. The ANNs trained with the combined database showed satisfactory results and were superior to algebraic power-law correlations developed with the combined database.

Siddique and Alhazmy [64] also tested a single internally microfinned tube with a nominal inside diameter of 7.38 mm. The microfin height was 0.20 mm, the helix angle was 18°, and the number of fin starts was 50. Experiments were conducted in a double-pipe heat exchanger with water as both the cooling and the heating fluid for six sets of runs. The pressure drop data were collected under isothermal conditions. Data were taken for turbulent flow with 3300 ≤ Re ≤ 22,500 and 2.9 ≤ Pr ≤ 4.7. The heat transfer data were correlated by a Dittus-Boelter type correlation, while the pressure drop data were correlated by a Blasius type correlation; these correlations predicted their data to within 9% and 1%, respectively. The values predicted by the correlations for both the Nusselt number and the friction factor were compared with other studies. They found that the Nusselt numbers obtained from their correlation fall between the values predicted by the Copetti et al. [49] correlation and the Gnielinski [65] smooth tube correlation. For the pressure drop results, they reported the existence of a transition zone for Re < 11,500 in which the friction factor data exhibited a local maximum. The friction factors predicted by the presented correlation were nearly double those predicted by the Blasius smooth tube correlation. The authors concluded that the rough tube Gnielinski [65] and Haaland [66] correlations can be used as good approximations to predict the finned tube Nusselt number and friction factor, respectively, in the tested Reynolds number range.

Zdaniuk et al. [63] experimentally determined the heat transfer coefficients and friction factors for eight helically finned tubes and one smooth tube using liquid water at Reynolds numbers ranging between 12,000 and 60,000. The tested helically finned tubes had helix angles between 25° and 48°, numbers of fin starts between 10 and 45, and fin height-to-diameter ratios between 0.0199 and 0.0327. Power-law correlations for the Fanning friction and Colburn j-factors were developed by least-squares regression using five simple groups of parameters identified by Webb et al. [58]. The performance of the correlations was evaluated with the independent data of Jensen and Vlakancic [57] and Webb et al. [58], with average prediction errors in the 30% to 40% range.
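The ANN approach of Zdaniuk et al. [62] is straightforward to reproduce in outline: train a small multilayer perceptron on geometric and flow inputs to predict the logarithms of j and f. A minimal sketch using scikit-learn follows; the generating formula and tube parameters below are invented stand-ins for measured data, since [62] trained on experiments:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 200
# Inputs: Reynolds number, helix angle (deg), fin starts, e/D ratio
X = np.column_stack([
    rng.uniform(12_000, 60_000, n),   # Re
    rng.uniform(25, 48, n),           # helix angle
    rng.integers(10, 46, n),          # fin starts
    rng.uniform(0.020, 0.033, n),     # e/D
])
# Synthetic targets standing in for measured log10(j) and log10(f)
y = np.column_stack([
    -0.3 - 0.40 * np.log10(X[:, 0]) + 0.002 * X[:, 1],
    -0.1 - 0.25 * np.log10(X[:, 0]) + 0.004 * X[:, 1],
]) + rng.normal(0, 0.01, (n, 2))

# Standardize inputs so the small network trains reliably
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
net = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0)
net.fit(Xs, y)
print("training R^2:", net.score(Xs, y))
```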
The authors also gave recommendations about the use of some specific tubes from their experiments and concluded that the disagreements between the experimental results of Webb et al. [58], Jensen and Vlakancic [57], and their own study imply that a broader database of heat transfer and friction characteristics of flow in helically ribbed tubes is desirable. The authors further recommended that more research be performed on the influence of geometric parameters on flow patterns, especially in the interfin region, using modern flow visualization techniques or proven computational fluid dynamics (CFD) tools.

In a subsequent analysis, Zdaniuk et al. [67], using genetic programming, extended their earlier work [63] by presenting a linear regression approach to correlate experimentally determined Colburn j-factors and Fanning friction factors for the flow of liquid water in helically finned tubes. The experimental data came from the eight enhanced tubes used in their previous study [63] discussed above. This new study revealed that, in helically finned tubes, the logarithms of both the friction and Colburn j-factors can be correlated with linear combinations of the same five simple groups of parameters identified in their earlier work [63] plus a constant. The proposed functional relationship was tested with independent experimental data, yielding excellent results. The authors concluded that the performance of their proposed correlations is much better than that of the power-law correlations and only slightly worse than that of the artificial neural networks.

More recently, Webb [68] investigated the heat transfer and friction characteristics of three tubes (19.05 mm O.D., 17.32 mm I.D.), including one developed by the author (designated tube TC3) having a conical, three-dimensional roughness on the inner tube surface, with water flow in the tube. Experiments were conducted in a double-tube heat exchanger with water as both the cooling and the heating fluid. The pressure drop data were collected under adiabatic conditions. The data were taken at a tube-side Reynolds number range of 4000-24,000, with the Prandtl number varying from 6.6 to 5.9. The heat transfer data were correlated by a Dittus-Boelter type correlation, while the pressure drop data were correlated by a Blasius type correlation. The maximum measured uncertainty in the friction factor was reported to be 5.96%, while for most data points the uncertainty in the measured inside heat transfer coefficient was stated to be 8%. The experimentally obtained Nusselt numbers were compared with an independent study and found to be 9%-12% higher. The author reported that the TC3 truncated-cone tube provides a Nusselt number 3.74 times that of a plain tube, but with nearly 60% higher pressure drop, and concluded that the three-dimensional roughness offers potential for considerably higher heat transfer enhancement (e.g., 50% higher) than is given by helically ridged tubes. Accelerated particulate fouling data were also provided for the TC3 tube and for five different helically ribbed tubes at 1300 ppm foulant concentration and 1.07 m/s water velocity (Re = 16,000). The fouling rate was compared with the helical-rib geometries reported earlier by Li and Webb [69]. The author noted that the TC3 tube shows a very high accelerated particulate fouling rate, higher than that of the helically ribbed tubes tested by Li and Webb [69], and suggested that the 3-D roughness tubes should experience minimal and acceptably low fouling if used with relatively clean or treated water.
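The log-linear structure found by Zdaniuk et al. [67], described above, amounts to ordinary least squares on log-transformed factors; a minimal sketch with generic predictor groups follows (the groups and data below are placeholders, not the exact five parameter groups of [63, 67]):

```python
import numpy as np

def fit_log_linear(groups, j_factors):
    """Fit log10(j) = c0 + sum(c_i * g_i) by least squares,
    mirroring the linear-regression form of Zdaniuk et al. [67]."""
    A = np.column_stack([np.ones(len(j_factors)), *groups])
    coeffs, *_ = np.linalg.lstsq(A, np.log10(j_factors), rcond=None)
    return coeffs

# Placeholder predictor groups (e.g., log Re and helix angle)
rng = np.random.default_rng(1)
g1 = np.log10(rng.uniform(1e4, 6e4, 50))
g2 = rng.uniform(25, 48, 50)
j = 10 ** (-0.2 - 0.4 * g1 + 0.003 * g2 + rng.normal(0, 0.005, 50))
print(fit_log_linear([g1, g2], j))  # recovers ~[-0.2, -0.4, 0.003]
```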
Most recently, Bharadwaj et al. [70] experimentally determined the pressure drop and heat transfer characteristics of liquid water flowing in a single 75-fin-start spirally grooved tube (inside diameter = 14.808 mm, fin helix angle = 23°, and fin height = 0.3048 mm) with and without a twisted tape insert. Results were presented for uniform heating conditions at Pr = 5.4 in the range 300 < Re < 35,000. The grooves are clockwise with respect to the direction of flow. The authors noted that for the microfin tube experiments, transition-like characteristics begin at Re ~ 3000 and continue up to Re = 7000; beyond this value of Re, the friction factor remains nearly constant, similar to flow in a rough tube. Power-law correlations for the friction factors and Nusselt numbers were presented for the ranges 300 < Re < 3000 (laminar), 3000 < Re < 7000 (transition), and Re > 7000 (turbulent). They noted that in the laminar and turbulent ranges of Re, Nu is almost double its value in a smooth tube as predicted by the Dittus-Boelter correlation; in the transition range 3000 < Re < 7000, however, the Nu data almost coincide with the values predicted by the same smooth-tube correlation, indicating no enhancement in heat transfer in this range. A constant pumping power comparison with a smooth tube showed that the spirally grooved tube without twisted tape yields a maximum heat transfer enhancement of 400% in the laminar range and 140% in the turbulent range; however, for 2500 < Re < 9000, a reduction in heat transfer was noticed. For the experiments with twisted tape inserts having twist ratios of 10.15, 7.95, and 3.4, the heat transfer enhancement due to the spiral grooves was found to be enhanced even further relative to the smooth tube. They found that the direction of twist (clockwise or anticlockwise) influences the thermohydraulic characteristics. Constant pumping power comparisons with smooth tube characteristics show that in a spirally grooved tube with twisted tape, heat transfer increases considerably in the laminar range and moderately in the turbulent range of Reynolds numbers. However, for the spiral tube with an anticlockwise twisted tape (Y = 10.15), a reduction in heat transfer was noticed over the transition range of Reynolds numbers.

From the above comprehensive literature review, it is clear that many studies have been conducted to investigate the heat transfer and pressure drop characteristics of internally finned tubes in single-phase situations. However, much of the reported data pertains to large-fin systems, and most of the experimental correlations are applicable only to the particular systems for which they were developed. It is also apparent that quite a few empirical correlations based on experimental data exist for predicting pressure drop and heat transfer in turbulent flow in finned tubes, but there is substantial disagreement between the results predicted by these different correlations; therefore, a need exists for further research in this area.
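The disagreement between correlations noted above is easy to quantify for a given operating point; a minimal sketch comparing several Dittus-Boelter-type forms follows (the coefficient sets are generic placeholders spanning a plausible range, not the actual published correlations):

```python
# Generic Nu = C * Re**m * Pr**n coefficient sets (placeholders)
correlations = {
    "corr A": (0.023, 0.80, 0.40),
    "corr B": (0.043, 0.80, 0.40),
    "corr C": (0.027, 0.80, 0.33),
}

Re, Pr = 30_000, 5.0
nu = {name: C * Re**m * Pr**n for name, (C, m, n) in correlations.items()}
spread = (max(nu.values()) - min(nu.values())) / min(nu.values())
for name, value in nu.items():
    print(f"{name}: Nu = {value:.0f}")
print(f"spread between predictions: {spread:.0%}")
```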
Mathematical Modeling of Fins

General One-Dimensional Model. Consider a fin having a length L and a cross-sectional area A_C extending from a base surface. The thermal conductivity of the fin is k_f. The convection coefficient for the fin facing the outside fluid stream is h_f. It is assumed that h_f does not vary with position and that the variation of the fin temperature in the transverse direction is negligible. The x-axis is directed along the fin centerline starting from the base. The heat diffusion equation for the fin is

d²T_f/dx² − (h_f P/(k_f A_C))(T_f − T_∞) = 0,

where T_f and T_∞ are the fin temperature and the external fluid far-stream temperature, respectively, and P is the fin perimeter.

Joint Fins. Consider a thin wall separating two convective media having free-stream temperatures T_∞1 and T_∞2 on the left and right sides, respectively. It is assumed that T_∞1 > T_∞2. The convective medium with temperature T_∞1 is referred to as the "source" while the other one is referred to as the "sink". Suppose that a very long fin having a uniform cross-sectional area A_C penetrates through the wall, linking thermally both the source and the sink. The terminology "joint fin" is used to refer to such kinds of fins. It is assumed that the conduction heat transfer at the fin tips on the source and sink sides is negligible and that the convection coefficient for the fin on the source side is h_f1 while it is h_f2 on the sink side. Now consider that fin to have a finite length L and to be insulated at both ends. The fin heat transfer can be calculated from either the receiver or the sender fin portion; it is given in dimensionless form in [31] in terms of m_1 and m_2, the receiver and sender fin portion indices, which take the standard fin-index form m_i = sqrt(h_fi P/(k_f A_C)).

Hairy Fin System. Consider a rectangular fin (the primary fin) having a length L, a constant perimeter P, and a constant cross-sectional area A_C, extending from a surface kept at a base temperature T_b. A large number of pin rods (secondary fins) are attached to the outer surface of the primary fin. The resulting fin system is referred to as a "hairy fin system". The secondary fins are uniformly distributed over the primary fin surface. The x-axis is taken along the length of the primary fin starting from the base cross-section, while the y-axis is taken in the transverse direction. The corresponding one-dimensional conduction heat diffusion equation is given in [30]; in it, h_d, k_d, d, and T_f(x) are the convection coefficient between the surface of the secondary fins and the surrounding fluid, the thermal conductivity of the secondary fins, the secondary fin diameter, and the temperature of the main fin at the location of the secondary fin base, respectively. The ratio φ is the ratio of the total base area of the secondary fins to the total surface area of the primary fin, and the corresponding fin index m is defined accordingly in [30].

Rooted Fins. Consider a fin having a length L and a uniform cross-sectional area A_C extending from the interior surface of a wall of thickness L_1 (L_1 < L). The temperature of the interior surface is T_i, while that of the exterior surface is T_o. The thermal conductivities of the wall and the fin are k_w and k_f, respectively. The fin portion facing the outside fluid stream is subject to convection with a free-stream temperature T_∞ and a convection coefficient h_f. It is assumed that h_f does not vary with position and that the variation of the fin temperature in the transverse direction is negligible. The performance indicator γ is then given by the expression in [14], where η_o is the efficiency of the fin portion facing the outside stream. The surface at x = 0 is the fin root base surface. The factor n appearing in that expression depends on the shape factor S, whose unit is the reciprocal of the length unit. For example, if the wall temperature reaches its one-dimensional temperature field at an average distance t from the surface of the fin, then S = 1/t.
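For the general one-dimensional model above, the classical solution for a fin with an adiabatic tip follows directly; a minimal sketch of the standard textbook result, with illustrative property values:

```python
import math

def fin_heat_rate(h, P, k, A_c, L, T_base, T_inf):
    """Heat rate of a 1-D fin with adiabatic tip:
    q = sqrt(h*P*k*A_c) * (T_base - T_inf) * tanh(m*L),
    with m = sqrt(h*P/(k*A_c))."""
    m = math.sqrt(h * P / (k * A_c))
    return math.sqrt(h * P * k * A_c) * (T_base - T_inf) * math.tanh(m * L)

# Illustrative: 5 cm aluminum pin fin, d = 5 mm, in air
d = 0.005
q = fin_heat_rate(h=25.0, P=math.pi * d, k=200.0,
                  A_c=math.pi * d**2 / 4, L=0.05, T_base=100.0, T_inf=25.0)
print(f"fin heat rate ~ {q:.2f} W")  # ~1.4 W
```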
Biconvection Perimeter-Wise Fins. Consider a fin with a uniform cross-section A_C having a thermal conductivity k_f. A uniform fin portion of perimeter P_1 is subject to convection with a free-stream temperature T_∞1 and a convection heat transfer coefficient h_f1, while the remaining fin portion of perimeter P_2 is subject to another convection medium with T_∞2 and h_f2 as the convective parameters. Assuming the temperature variation along the cross-section is negligible, the energy equation takes the form [32]

d²T/dx² − (h_f1 P_1/(k_f A_C))(T − T_∞1) − (h_f2 P_2/(k_f A_C))(T − T_∞2) = 0.

Biconvection Longitudinal-Wise Fins. Consider a fin with uniform cross-section A_C and perimeter P having a thermal conductivity k_f and a very long length. The fin portion starting from the base and ending a distance L_1 from the base is subjected to convection with a free-stream temperature T_∞1 and a convection heat transfer coefficient h_f1. The remaining portion is subjected to another convective medium with T_∞2 and h_f2 as the convective parameters. Assuming the temperature variation along the cross-section is negligible, the fin heat transfer rate is given by the expression in [32], where T_b is the base temperature, m_1 = sqrt(h_f1 P/(k_f A_C)), and m_2 = sqrt(h_f2 P/(k_f A_C)).

Biconvection Span-Wise Rectangular Fins. Consider a rectangular thin fin having a thermal conductivity k_f, a length L, and a thickness t. The fin surface is subjected to two different convection conditions. The first region has a span height H_1, while the second region has a span height H_2. The convection medium facing the first region has T_∞1 and h_f1 as the free-stream temperature and the convection heat transfer coefficient, respectively, while T_∞2 and h_f2 are those corresponding to the convection conditions of the second region. For a very long fin, Khaled [32] derived a closed-form expression for the fin total heat transfer.

Permeable Fins. Consider a permeable fin with uniform cross-section A_C (A_C = 2Ht) and perimeter P (P = 2H) having a thermal conductivity k_f and a very long length. The fin encounters uniform flow (with density ρ and specific heat c_p) across its thickness, with an average suction speed at the upper surface of εV_o, where ε is the ratio of the hole area to the total surface area. The upper surface of the fin is subjected to convection with a free-stream temperature T_∞ and a convection coefficient h_f. The direction of V_o is from the upper surface to the lower surface of the fin. Khaled [34] derived the fin heat transfer rate for this configuration, in which Pr is the external fluid Prandtl number, and also derived a correlation for θ(0) using a similarity solution for the boundary layer problem above the upper surface of the fin. That correlation, together with the definitions of the velocities u_∞o and V_o, is given in [34]; there V is the far-stream external normal velocity, d is the diameter of the holes on the fin, and μ is the dynamic viscosity of the external fluid.
Porous Fins. Heat transfer in porous fins has also gained recent attention from many researchers. Porous materials of high thermal conductivity have been used to enhance heat transfer, as will be discussed in Section 3.2. Kiwan and Al-Nimr [33] were among the recent researchers who numerically investigated the effect of using porous fins on the heat transfer from a heated horizontal surface. The basic philosophy behind this kind of enhancer is to increase the effective area through which heat is convected to the ambient fluid, in addition to augmenting the convection heat transfer coefficient by increasing the mixing, or thermal dispersion, between the ambient fluid and the solid phase of the porous fin. They found that a porous fin of a certain porosity can give the same performance as a conventional solid fin while saving fin material in proportion to the porosity (a porosity of ε saves 100ε percent of the fin material). In addition, Kiwan and Zeitoun [71] found that porous fins enhanced the heat transfer coefficient by more than 70% compared to conventional solid fins under natural convection conditions.

Capsulated Liquid Metal Fins. A capsulated fin is a capsule, made of a very thin metal shell of very high thermal conductivity, filled with a liquid metal. The capsulated fins are attached to hot surfaces in different manners and directions, which allows the activation of natural convective currents in the liquid metal held inside the capsules. There must be some temperature increase in the direction of the gravitational force to activate the free convective currents within the liquid metal. Aldoss et al. [72] introduced this novel type of heat transfer enhancement technique for the first time, and numerically estimated and compared the thermal performance of a liquid metal capsulated fin with that of a conventional solid fin, investigating the effect of several design and operating parameters. Two equal-size geometries for the longitudinal sectional area of the capsulated fins were considered: rectangular and half-circular fins. It was found that using capsulated fins might enhance the performance over an equal-size conventional solid fin by about 500% for a conventional steel fin, 270% for a conventional solid sodium fin, and 150% for a conventional aluminum fin, for a fin length-to-width ratio of 5. Using capsulated fins is justified in applications that involve a high base temperature, a high height-to-width aspect ratio, and a high external convective heat transfer coefficient.

Models for Heat Transfer Correlations for Microfins.
In experimental work on turbulent single-phase heat transfer in internally finned tubes, most of the heat transfer correlations can be broadly classified as (i) simple nondimensional empirical correlations, (ii) compounded/modified nondimensional empirical correlations containing additional variables adjudged to be of controlling importance by the researcher, and (iii) correlations based on the heat-momentum transfer analogy model for rough surfaces, as first proposed by Dipprey and Sabersky [73] and since applied to microfinned tubes. The simple nondimensional correlations use the governing differential conservation equations of the tube-side fluid turbulent boundary layer as a basis for defining the appropriate nondimensional groups for correlating the heat transfer data. Thus, the Nusselt number is proposed as a function of the Reynolds and Prandtl numbers, and the heat transfer data from these investigations are typically correlated by a relationship of the form

Nu = hd/k = C Re^m Pr^n,

the so-called Dittus-Boelter type correlation. The variables h, d, and k are the fully developed convection heat transfer coefficient, the pipe inner diameter, and the fluid thermal conductivity, respectively. The experimental data are then used to find the coefficients C, m, and n using regression analysis or cross-plotting.

In experimental works where very limited testing has been done, for example using only a single microfinned tube, very simple correlations of the Dittus-Boelter type have been proposed to explain the data trend. A good example of this type of correlation is found in the work of Chiou et al. [55], in which the average Nusselt numbers from the experimentally obtained data were correlated as Nu = 0.043 Re^0.8 Pr^0.4. In the same way, Copetti et al. [49] correlated their heat transfer data with a correlation of this form. Another variation of this approach is seen in the work of Al-Fahed et al. [54], who correlated the average Nusselt numbers from their experimentally obtained data with a Sieder-Tate type correlation.

As stated above, the compounded/modified nondimensional empirical correlations contain additional variables adjudged to be of controlling importance by the researcher. A good example of this approach is seen in the work of Carnavos [45], who conducted an extensive experimental investigation using eleven internally finned tubes with air, water, and ethylene glycol-water solutions. His heat transfer correlation takes the form of a modified Dittus-Boelter single-phase correlation with an additional correction factor F, consisting of the ratio of the actual flow area (A_fa) to the core flow area (A_fc) and the ratio of the nominal heat transfer area (A_n) to the actual heat transfer area (A_a); the secant of the helix angle (α), raised to the third power, is also included in F. Another variation of the nondimensional correlation type is to fit the heat transfer data in terms of the so-called Colburn j-factor, modified by additional variables adjudged to be of controlling importance. An example of this can be seen in the work of Zdaniuk et al. [63], who experimentally determined the heat transfer coefficients and friction factors for eight helically finned tubes and one smooth tube using liquid water; their heat transfer correlation is expressed in terms of N_s, the number of fin starts, e/D, the fin height-to-diameter ratio, and α, the helix angle.
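A sketch of how a Carnavos-style corrected Dittus-Boelter correlation is evaluated is given below; the exponents on the area ratios are the commonly cited Carnavos values and should be checked against [45] before use, and the geometry numbers are illustrative:

```python
import math

def nu_carnavos(Re, Pr, A_fa_over_A_fc, A_n_over_A_a, helix_deg):
    """Modified Dittus-Boelter correlation with a Carnavos-style
    correction factor F built from area ratios and sec(alpha)^3."""
    F = (A_fa_over_A_fc ** 0.1) * (A_n_over_A_a ** 0.5) \
        * (1.0 / math.cos(math.radians(helix_deg))) ** 3
    return 0.023 * Re ** 0.8 * Pr ** 0.4 * F

# Illustrative finned-tube geometry:
print(f"Nu ~ {nu_carnavos(30_000, 5.0, 0.92, 0.75, 20.0):.0f}")
```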
Using the heat-momentum transfer analogy as applied for smooth surfaces, Dipprey and Sabersky [73] showed that a similar analogy model is applicable for sand-grain type wall roughness. For turbulent flow in a rough circular tube, this can be given as

Nu = (f/2) Re Pr / (1 + sqrt(f/2)[g(e+) Pr^n − B(e+)]),

where the Nusselt number is a function of the Reynolds and Prandtl numbers and the friction factor f, which depends on the geometrical variables (e/D, N_s, and α) of the microfin tube. B(e+) is the friction similarity function for the rough/microfinned tube, where e+ is the roughness Reynolds number, while g(e+) is the heat transfer correlating function for the rough/microfinned tube, determined separately. These functions will be different for different roughness types, that is, for geometrically different microfinned tubes. An example of this type of heat transfer correlation is found in the work of Copetti et al. [49], who also correlated their microfinned tube heat transfer data in this form.

3.2. Heat Transfer Enhancement Using Porous Media. Porous media can be used to enhance heat transfer in many configurations [74]. The convective heat transfer coefficient is larger for systems filled with porous material than for systems without porous material, owing to the large thermal conductivity of the porous matrix compared with the fluid thermal conductivity, especially for gas flows. However, porous media result in a substantial pressure drop [9]. To minimize the pressure drop, partial fillings of porous media can be used; partial filling with a porous medium has the advantage of reducing the pressure drop compared with a system filled completely with the porous medium [9]. A partial filling of a channel with porous media redirects the flow to escape from the core porous region, depending on the permeability of the porous medium, toward the outer region. This effect reduces the boundary layer thickness and, as such, enhances the rate of heat transfer. Moreover, the porous medium increases the effective thermal conductivity and heat capacity of the fluid-porous material medium, and the solid matrix enhances the rate of radiative heat transfer in the system, especially if a gas is the working fluid. In summary, the heat transfer enhancements associated with partial filling with a porous medium take place by three mechanisms: (i) flow redistribution, (ii) thermal conductivity modification, and (iii) radiative property modification of the medium.

Many works have been conducted in the domain of partial filling with porous media. For example, Jang and Chen [75] conducted a numerical study of forced flow in a parallel channel partially filled with a porous medium by adopting the Darcy-Brinkman-Forchheimer model with a thermal dispersion term.
Chikh et al. [76, 77] presented an analytical solution for the fully developed flow in an annulus configuration partially filled with a porous medium. Al-Nimr and Alkam [78] extended the analysis to the transient solution for annulus flow with a porous layer. An increase of up to 12 times in the Nusselt number was reported for annuli partially filled with porous substrates located on either the inner or the outer cylinder, in comparison with clear annuli. Alkam and Al-Nimr [79] further investigated the thermal performance of a conventional concentric tube heat exchanger with porous substrates placed on both sides of the inner cylinder. The numerical results obtained showed that porous substrates of optimum thicknesses yield the maximum improvement in the heat exchanger performance with a moderate increase in the pumping power. This kind of heat transfer enhancer is used in a wide range of practical applications, including (a) forced channel flow applications [80-84] and (b) renewable energy applications [82, 85]. Recent reviews of the subject are available in [74, 86].

Mathematical Modeling

Darcy Law. Darcy's law is one of the earliest flow transport models in porous media. In his experiments on steady-state unidirectional flow in a uniform medium, Darcy [87] revealed a proportionality between the flow rate and the applied pressure difference. In modern notation, Darcy's law is expressed by

u = −(K/μ)(dP/dx), (24)

where u, P, μ, and K are the Darcy velocity, the fluid pressure, the dynamic viscosity of the fluid, and the permeability of the porous medium, respectively.

Brinkman's Equation. As seen from (24), Darcy's law ignores the boundary effects on the flow. This assumption may not be valid, especially when the boundaries of the porous medium are close to each other. Therefore, the following extension of the Darcy equation is used for unidirectional flow:

dP/dx = −(μ/K)u + μ_eff (d²u/dy²). (25)

Equation (25) has been referred to in the literature [88, 89] as the "Brinkman equation". The first term on the right is the Darcy term, while the second term on the right is analogous to the momentum diffusion term in the Navier-Stokes equation, with μ_eff as the effective dynamic viscosity of the medium. For an isotropic porous medium, Bear and Bachmat [90] showed that the effective viscosity is related to the porosity through the relation

μ_eff/μ = 1/(ε T*), (26)

where ε and T* are the porosity (void-volume-to-total-medium-volume ratio) and the tortuosity of the medium, respectively.

Generalized Flow Transport Model. In cases where the fluid inertia is not negligible, another drag force starts to become significant: the form drag exerted by the fluid on the solid. Vafai and Tien [91] suggested a generalized model for flow transport in porous media based on the Brinkman and Forchheimer equations, the latter taking into consideration the presence of form drag due to fluid inertia. This generalized model is summarized in the following equation:

(ρ_f/ε)[∂u/∂t + (1/ε)(u·∇)u] = −∇P + μ_eff ∇²u − (μ/K)u − (c_F ρ_f/√K)|u|u, (27)

where c_F and ρ_f are the dimensionless form-drag constant and the fluid density, respectively. For a packed bed of solid spheres of diameter d_p, K and c_F take the Ergun-based forms

K = ε³ d_p²/(150(1 − ε)²), c_F = 1.75/√(150 ε³). (28)

In addition, unlike (24) and (25), equation (27) does not neglect the flow convective terms (the terms on the left side); the last term on the right represents the form drag. Equation (27) is usually referred to as the Brinkman-Forchheimer equation (Table 1).
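A minimal sketch evaluating the packed-bed permeability and Darcy velocity from (24) and (28); the bead size, porosity, and pressure gradient below are illustrative:

```python
import math

def packed_bed_K_cF(porosity, d_p):
    """Ergun-based permeability and form-drag constant for a packed
    bed of spheres, per the forms cited in eq. (28)."""
    K = porosity**3 * d_p**2 / (150.0 * (1.0 - porosity)**2)
    c_F = 1.75 / math.sqrt(150.0 * porosity**3)
    return K, c_F

# Illustrative: 1 mm beads, porosity 0.4, water (mu = 1e-3 Pa s), 100 Pa/m
K, c_F = packed_bed_K_cF(0.4, 1.0e-3)
u_darcy = (K / 1.0e-3) * 100.0          # u = (K/mu) * (-dP/dx), eq. (24)
print(f"K = {K:.2e} m^2, c_F = {c_F:.3f}, Darcy velocity = {u_darcy*1e3:.2f} mm/s")
```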
Heat Transfer inside Porous Media. Following the works of Amiri and Vafai [8, 92] and Alazmi and Vafai [93], and based on the principle of local thermal nonequilibrium between the fluid and the solid, the energy equations for the fluid and solid phases are

ε(ρc_p)_f (∂T_f/∂t) + (ρc_p)_f u_f·∇T_f = ∇·(k_af ∇T_f) + h_fs (T_s − T_f),
(1 − ε)(ρc_p)_s (∂T_s/∂t) = ∇·(k_as ∇T_s) − h_fs (T_s − T_f), (29)

where T_f, T_s, u_f, k_af, k_as, ε, and h_fs are the local fluid averaged temperature, the local solid averaged temperature, the fluid velocity vector, the fluid effective thermal conductivity tensor, the solid effective thermal conductivity tensor, the porosity of the medium, and the interstitial convective heat transfer coefficient, respectively. As seen from (29), the energy equations for the two phases are coupled by the interstitial convective heat transfer between the fluid and the solid. The concepts of local thermal nonequilibrium and fluid thermal dispersion are well established in the theory of porous media; examples of the corresponding research can be found in the works of Amiri and Vafai [8, 92] and Alazmi and Vafai [93]. In applications involving porous media with small pore and solid particle sizes, as illustrated in [8, 92], Khanafer and Vafai [95], and Marafie and Vafai [96], local thermal equilibrium may serve as a good approximation for the temperature field. In these applications, the solid and fluid temperatures are the same, and (29) reduces to

(ρc_p)_m (∂T/∂t) + (ρc_p)_f u_f·∇T = ∇·(k_a ∇T), (30)

where (ρc_p)_m = ε(ρc_p)_f + (1 − ε)(ρc_p)_s and k_a = k_af + k_as. The second term on the left is responsible for heat transfer due to convection. The thermal conductivity tensors for isotropic materials reduce to

k_af = ε k_f, k_as = (1 − ε) k_s. (31)

Heat Transfer Enhancement Using Fluids with Large Particle Suspensions. A huge number of investigations have been carried out in the past seeking to develop novel passive methods for enhancing the effective thermal conductivity of the fluid or increasing the convection heat transfer coefficient. One of these methods is to introduce into the base liquid highly thermally conductive particulate solids such as metals or metal oxides. Examples of these investigations, according to Ding et al. [10], are seen in the works of Sohn and Chen [97], Ahuja [98, 99], and Hetsroni and Rozenblit [100]. These early investigations used suspensions of millimeter- or micrometer-sized particles. They showed some enhancement; however, they introduced problems into the thermal system, such as abrasion and channel clogging due to poor suspension stability, especially in the case of mini- and/or microchannels. A newer passive method developed by Choi [101], termed "nanofluids", has been shown to resolve some of the disadvantages associated with suspensions of large particles.

Heat Transfer Enhancement Using Nanofluids

3.4.1. Introduction. Nanofluids are fluids that contain suspensions of nanoparticles of highly thermally conductive materials, like carbon, metals, and metal oxides, in heat transfer fluids to improve the overall thermal conductivity. These nanoparticles are usually of order 100 nm or less in size. Nanoparticles can be either spherical or cylindrical, like multiwalled carbon nanotubes [102]. The advantages of properly engineered nanofluids, according to Ding et al. [10], include the following: (a) higher thermal conductivities than those predicted by currently available macroscopic models, (b) excellent stability, (c) little penalty due to an increase in pressure drop, and (d) little penalty due to the pipe wall abrasion experienced with suspensions of millimeter- or micrometer-sized particles.
The enhancements in the thermal conductivity of nanofluids are due to the fact that the particle surface-area-to-volume ratio increases as the diameter decreases, which increases the overall exposed heat transfer surface area for a given concentration of particles. Further, the presence of nanoparticle suspensions in fluids tends to increase the mixing effects within the fluid, which produces an additional increase in the fluid's effective thermal conductivity due to thermal dispersion effects, as discussed by Xuan and Li [122].

Nanofluids possess a large effective thermal conductivity for very low nanoparticle concentrations. For instance, the effective thermal conductivity of ethylene glycol is increased by up to 40% over that of the base fluid when 0.3 volume percent of copper nanoparticles of mean diameter less than 10 nm is suspended in it [125]. This enhancement is expected to grow as the flow speed increases, owing to an increase in the thermal dispersion effect [120]. Lee et al. [105] measured the effective thermal conductivity of Al2O3 and CuO nanoparticles suspended in water and ethylene glycol. They found that the effective thermal conductivity was enhanced by more than 20% when a 4% volume fraction of CuO in ethylene glycol was used.

Ding et al. [10] indicated that Xuan and Li [122] showed the convection heat transfer coefficient to be increased by ~60% for an aqueous-based nanofluid of 2% Cu nanoparticles by volume, even though the nanofluid had an effective thermal conductivity only approximately 12.5% higher than that of the base liquid. They also indicated that Wen and Ding [124] observed a ~47% increase in the convective heat transfer coefficient of aqueous γ-alumina nanofluids at x/D ~ 60 for 1.6 vol.% nanoparticle loading and Re = 1600, which was much greater than the enhancement of thermal conduction (<~10%). Remarkably, Ding et al. [10] showed that nanofluids containing 0.5 wt.% of carbon nanotubes (CNT) produced an enhancement in convective heat transfer of over 350% relative to the base liquid at Re = 800, with the maximum enhancement occurring at an axial distance of approximately 110 tube diameters. This increase is much greater than that due to the enhancement of thermal conduction (<~40%). The observed large enhancement of the convective heat transfer coefficient was attributed to several factors, among them the high aspect ratio of the carbon nanotubes.

The thermal capacity of a nanofluid, (ρC_p)_nf, is given by

(ρC_p)_nf = (1 − φ)(ρC_p)_bf + φ(ρC_p)_p, (32)

where the subscripts nf, bf, and p denote the nanofluid (or the dispersive region), the base fluid, and the particles, respectively. The parameter φ is the nanoparticle volume fraction, which represents the ratio of the nanoparticle volume to the total volume of the nanofluid. A nanofluid composed of pure water and copper nanoparticle suspensions with a 2% volume fraction has a value of (ρC_p)_nf equal to 99% of that of pure water, which is almost the same as the thermal capacity of the pure fluid.
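A minimal sketch evaluating (32) for the water/copper example above; the property values are standard handbook numbers:

```python
def rho_cp_nanofluid(phi, rho_cp_bf, rho_cp_p):
    """Volume-weighted thermal capacity of a nanofluid, eq. (32)."""
    return (1.0 - phi) * rho_cp_bf + phi * rho_cp_p

# Water: rho*cp ~ 998 * 4182 J/(m^3 K); copper: ~ 8933 * 385 J/(m^3 K)
water, copper = 998 * 4182, 8933 * 385
nf = rho_cp_nanofluid(0.02, water, copper)
print(f"(rho*Cp)_nf / (rho*Cp)_water = {nf / water:.3f}")  # ~0.99
```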
Effective Thermal Conductivity of Nanofluids. One of the elementary models for the effective thermal conductivity of nanofluids is that of Xuan and Roetzel [120]. They suggested a model of the form

k_nf = (k_nf)_o + k_d, (33)

where the thermal dispersion contribution k_d is proportional to the bulk flow velocity u and involves a constant C* that depends on the diameter of the nanoparticle and its surface geometry. The constant (k_nf)_o represents the effective thermal conductivity of the nanofluid under stagnant conditions, where the bulk velocity u is zero. Xuan and Roetzel [120] proposed that it be taken equal to the value predicted by the Maxwell model for the effective thermal conductivity of solid-liquid mixtures of micro- or millimeter-sized particles suspended in base fluids [126], which has the form

(k_nf)_o/k_bf = [k_p + 2k_bf + 2φ(k_p − k_bf)]/[k_p + 2k_bf − φ(k_p − k_bf)], (34)

where k_p and k_bf are the thermal conductivities of the nanoparticles and the base fluid, respectively. According to formula (34), a 2.0% volume fraction of copper particles produces an 8.0% increase in (k_nf)_o compared to the thermal conductivity of the pure fluid. However, Das et al. [114] demonstrated that for a 1% particle volumetric concentration of CuO/water nanofluids, the thermal conductivity ratio increased from 6.5% to 29% over a temperature range of 21-51 °C, and Liu et al. [127] synthesized a copper nanofluid that showed a thermal conductivity enhancement of 23.8% for a 0.1% volumetric concentration of copper particles in water. Therefore, new studies have been undertaken seeking new models for the effective thermal conductivity of nanofluids.

Among these studies is that of Yu and Choi [128], who proposed a renovated Maxwell model that includes the effect of the nanolayer surrounding the nanoparticles. They found that this nanolayer plays a major role in the effective thermal conductivity of nanofluids when the particle diameter is less than 10 nm. The effects of the surface adsorption of nanoparticles on the thermal conductivity of the nanofluid were modelled by Wang et al. [110], who showed good agreement between the model and their experiments for 50 nm CuO/deionized water at dilute concentrations (<0.5%). Koo and Kleinstreuer [129, 130] presented a thermal conductivity model for nanofluids comprising a static part and a dynamic part due to the Brownian motion of the nanoparticles. The nanofluid thermal conductivity models developed by Hamilton and Crosser [131] and Bruggeman [132] were found to deviate from the thermal conductivity measurements of Murshed et al. [133] by about 17% for a 5% particle volumetric concentration.

Researchers have also taken into account the role of the effective thermal conductivity of the interfacial shell between the nanoparticle and the base fluid, as can be seen in the work of Xue and Xu [134]. The effective thermal conductivity model of nanofluids proposed by Chon et al. [135] was developed as a function of the Prandtl number, the particle Reynolds number based on the Brownian velocity, the thermal conductivities of the particle and the base fluid, the volume fraction, and the particle size.
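A minimal sketch evaluating the Maxwell baseline (34); the conductivities below are standard handbook values:

```python
def k_maxwell(phi, k_p, k_bf):
    """Maxwell effective stagnant conductivity of a dilute suspension,
    eq. (34)."""
    num = k_p + 2 * k_bf + 2 * phi * (k_p - k_bf)
    den = k_p + 2 * k_bf - phi * (k_p - k_bf)
    return k_bf * num / den

# Copper (~400 W/m K) in water (~0.613 W/m K), 2 vol.%
ratio = k_maxwell(0.02, 400.0, 0.613) / 0.613
print(f"(k_nf)_o / k_bf = {ratio:.3f}")  # ~1.06 (dilute limit ~ 1 + 3*phi)
```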
Moreover, Prasher et al. [136] presented a Brownian-motion-based convective-conductive model for the effective thermal conductivity of nanofluids. The nanofluid thermal conductivity model developed by Jang and Choi [137] took into account the collisions between base fluid molecules, the thermal diffusion of nanoparticles in fluids, collisions between nanoparticles, and nanoconvection due to Brownian motion. Comprehensive reviews of experimental and theoretical investigations on the thermal conductivity of nanofluids by various researchers were compiled by Wang and Mujumdar [138] and Vajjha and Das [139]; the previously developed models are presented in the work of Vajjha and Das [139].

It should be mentioned that Buongiorno [140] considered seven slip mechanisms that can produce a relative velocity between the nanoparticles and the base fluid: inertia, Brownian diffusion, thermophoresis, diffusiophoresis, the Magnus effect, fluid drainage, and gravity. Of all of these mechanisms, only Brownian diffusion and thermophoresis were found to be important. Buongiorno's [140] analysis consisted of a two-component equilibrium model for mass, momentum, and heat transport in nanofluids. He found that a nondimensional analysis of the equations implied that energy transfer by nanoparticle dispersion is negligible and cannot explain the abnormal heat transfer coefficient increases; that is, the second term on the right side of (33) is negligible. Buongiorno suggested instead that the boundary layer has different properties because of the effects of temperature and thermophoresis; the viscosity may decrease in the boundary layer, which would lead to heat transfer enhancement according to his analysis [140]. Although Buongiorno [140] found the thermal dispersion mechanism for enhancing heat transfer under convective conditions to be negligible, many researchers still treat it as a major mechanism; an example is the recent work of Mokmeli and Saffar-Avval [141].

Recently, Vajjha and Das [139] developed a model for the thermal conductivity of three nanofluids containing aluminum oxide, copper oxide, and zinc oxide nanoparticles dispersed in a base fluid of 60:40 (by mass) ethylene glycol and water. The developed model is a refinement of an existing model, incorporating the classical Maxwell model and the Brownian motion effect to account for the thermal conductivity of nanofluids as a function of temperature, particle volumetric concentration, the properties of the nanoparticles, and the base fluid. The developed model agrees well with the experimental data. Several existing models for thermal conductivity were compared with the experimental data obtained from these nanofluids, and they did not exhibit good agreement, except for the model developed by Koo and Kleinstreuer [129].

Vajjha and Das [139] Effective Thermal Conductivity Model for Nanofluids. The Vajjha and Das [139] model expresses the nanofluid thermal conductivity as the sum of a static (Maxwell-type) part and a Brownian-motion part. In the model, κ is the Boltzmann constant (κ = 1.381 × 10^−23 J/K), T_o is a reference temperature (T_o = 273 K), (C_p)_bf is the specific heat of the base fluid, ρ_bf is the base fluid density, ρ_p is the density of the nanoparticle, and d_p is the nanoparticle diameter. Note that the base fluid is the 60:40 (by mass) ethylene glycol and water mixture (Table 2).

Summary of Literature on Nanofluids.
All of the research on heat transfer in nanofluids reports increases in heat transfer due to the addition of nanoparticles to the base fluid; to what degree, and by what mechanism, is still debatable. However, the following trends were in general agreement among all researchers [142].

(i) There is an enhancement in the heat transfer coefficient with increasing Reynolds number.

(ii) The heat transfer coefficient enhancement increases with decreasing nanoparticle size.

(iii) The heat transfer coefficient enhancement increases with increasing fluid temperature (more than for the base fluid alone).

(iv) The heat transfer coefficient enhancement increases with increasing nanoparticle volume fraction.

Some nanofluid research results conflict. Below are some explanations for why there might be such discrepancies between results [142].

A. Aggregation. It has been shown that nanoparticles tend to aggregate quite quickly in nanofluids, which can impact the thermal conductivity and the viscosity of the nanofluid. Not all researchers account for this, whether in experimental or numerical research.

B. Unknown Nanoparticle Size Distribution. Researchers rarely report the size distribution of nanoparticles or aggregates (they usually list only one nanoparticle size), which could affect results. Many researchers do not measure the nanoparticles themselves and rely on the manufacturer to report this information.

C. Differences in Theory. Researchers have not agreed upon which heat transfer mechanisms are important, which dominate, and how they should be accounted for in calculations. This discrepancy leads to different analyses and different results.

D. Different Nanofluid Preparation Techniques. Depending on how the nanofluids are made, for instance whether by a one-step or two-step method, the dispersion of the nanofluids could be affected. Some researchers coat the nanoparticles to inhibit agglomeration, while others do not.

Heat Transfer Enhancement Using Phase-Change Devices. A heat pipe is an efficient, compact device with a simple structure and no moving parts that allows the transfer of a large amount of heat from various engineering systems through a small surface area. It basically consists of a duct closed at both ends whose inside wall is covered with a layer of porous wicking material saturated with the liquid phase of the working fluid, while the vapor phase fills the central core of the duct. Heat is transferred from one end of the pipe (the evaporator) to the other (the condenser) by evaporation from the wick at the evaporator, flow of the vapor through the core to the condenser, condensation on the wick in the condenser, and return flow of the liquid by capillary action in the wick back to the evaporator. If the condenser is above the evaporator, the liquid returns to the evaporator under gravity and the need for a wick can be avoided. A heat source and a heat sink, usually differing by a small temperature difference, are present at the ends of the heat pipe. The heat pipe performance depends on the flow rates of the vapor and liquid, generally requiring the pressure gradient in the vapor to be negative, and positive in the liquid, as provided by the self-pumping ability of the wick material.
The heat transfer capability of a heat pipe is mainly related to the transport properties of the selected working fluid, the system operating pressure, and the wick porosity characteristics. Since the earliest theoretical analysis of heat pipes was presented by Cotter [143], a great deal of research has been conducted on heat pipes. Vasiliev [144] reviewed and listed the heat pipe R&D work on conventional heat pipes, heat pipe panels, loop heat pipes, vapor-dynamic thermosyphons, micro/miniature heat pipes, and sorption heat pipes in different industrial applications. A novel approach is to utilize nanofluids to enhance the capabilities of heat pipes. Shafahi et al. [145] analyzed and modeled the influence of a nanofluid on the thermal performance of a cylindrical heat pipe. The authors reported that the nanoparticles within the liquid enhance the thermal performance of the heat pipe by reducing the thermal resistance while enhancing the maximum heat load it can carry. The existence of an optimum nanoparticle mass concentration for maximizing the heat transfer limit was also established, and smaller particles were shown to have a more pronounced effect on the temperature gradient along the heat pipe. Recently, Yau and Ahmadzadehtalatapeh [146] conducted a literature review on the application of horizontal heat pipe heat exchangers for air conditioning in tropical climates. Their work focused on the energy saving and dehumidification enhancement aspects of horizontal heat pipe heat exchangers; the related papers were grouped into three main categories, and a summary of experimental and theoretical studies was presented.

A variation of the heat pipe called the "microheat pipe" (MHP), mostly used in electronic cooling applications, was first proposed by Cotter [147]. Unlike conventional heat pipes, MHPs do not contain any wick structure but instead consist of small noncircular (usually triangular) channels; the sharp-angled corner regions in these noncircular tubes serve as liquid return arteries. Microfluid flow channels in MHPs have hydraulic diameters on the order of 10-500 μm. Smaller flow channels in MHPs are desirable in order to achieve higher heat transfer coefficients and higher heat transfer surface area per unit flow volume. MHPs are also capable of removing large amounts of heat, with the possibility of achieving extremely high heat fluxes near 1000 W/cm². Much research has been carried out in recent years to predict the performance of MHPs, as evident in the excellent review papers of Vasiliev [148] and Sobhan et al. [149] on MHP research and development work.

Phase change is also used in many other applications to enhance heat transfer, as illustrated in the work of Khadrawi and Al-Nimr [150], who proposed and then analytically investigated a novel technique for the cooling of intermittently working internal combustion engines. This technique utilizes a phase change material, which absorbs heat by melting and thus cools the engine as it runs, while the same phase change material releases heat upon restarting of the engine. The main findings of this work show that as the melting temperature and the enthalpy of melting increase, the operational time of the system increases, especially for large values of the melting temperature. The surface temperature when using a phase change material is much lower than in the conventional air-cooling case.
3.6. Heat Transfer Enhancement Using Flexible Seals. Single-layered (SL) and double-layered (DL) microchannels supported by flexible seals were analyzed in the works of Vafai and Khaled [151]. In their work, they related the deformation of the supporting seals to the average internal pressure through the theory of elasticity. This relation is coupled with the momentum equation, which is solved numerically using an iterative implicit finite difference method; the energy equation is then solved. For the same flexible seals, they showed numerically that a flow that causes an expansion of the microchannel height by a factor of 1.5 produces a drop in the average surface temperature of the SL channel by 53% relative to its value for a rigid SL channel at the same pressure drop and under the same constant heat flux. They showed that the cooling effect due to hydrodynamic expansion increases as the Prandtl number decreases. Further, their results show that SL flexible microchannel heat sinks mostly provide better cooling attributes than DL flexible microchannel heat sinks delivering the same coolant flow rate and having the same flexible seals. However, they showed that rigid DL microchannel heat sinks provide better cooling than rigid SL microchannel heat sinks when operated at the same pressure drop. Finally, they concluded that SL flexible microchannel heat sinks are preferred for large pressure drop applications, while DL flexible microchannel heat sinks are preferred for applications involving low pressure drops along with stiff seals. Later on, Khaled [13] found that the average temperature of the heated plate decreases as the seal number (F_n) increases until F_n reaches an optimum, after which this temperature starts to increase with further increase in F_n.

3.6.1. Models for Heat Transfer Correlations. Khaled [152] later analytically analyzed a two-dimensional flexible thin film channel having a small and variable height h compared to its length B. The x-axis is taken along the coolant flow direction, while the y-axis is taken along the channel height. The width of the thin film channel, D, is assumed to be large enough that two-dimensional flow between the plates can be assumed. The height of the flexible thin film channel is considered to have the following generic form:

h(x) = h_i + (h_e − h_i)(x/B)^n, (36)

where n, h_e, and h_i are the power-law index, the exit height, and the inlet height, respectively. When n = 1.0, the inclination angle of the upper plate is uniform; as such, formula (36) with n = 1.0 models the height profile of flexible thin film channels having inflexible plates. However, each differential element of the upper plate has a different slope when n ≠ 1.0; as such, formula (36) models the height distributions of flexible thin film channels having flexible upper plates and fixed lower plates when n ≠ 1.0. The case n = 0.25 mathematically represents the height profile when the upper plate stiffness is negligible compared to the seal stiffness. Recall that the seal stiffness is the applied tension force on the seal required to produce a 1 m elongation in its thickness.
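A minimal sketch of the power-law height profile, assuming the form h(x) = h_i + (h_e − h_i)(x/B)^n reconstructed in (36) above; the channel dimensions are illustrative, chosen so that H_i = h_i/h_e = 3.0 as in [152]:

```python
def channel_height(x, B, h_i, h_e, n):
    """Power-law height profile of a flexible thin-film channel, eq. (36):
    n = 1 -> rigid inclined upper plate; n = 0.25 -> plate stiffness
    negligible relative to the seal stiffness."""
    return h_i + (h_e - h_i) * (x / B) ** n

B, h_i, h_e = 0.05, 0.3e-3, 0.1e-3   # 50 mm long, 0.3 mm inlet -> 0.1 mm exit
for n in (1.0, 0.25):
    h_mid = channel_height(B / 2, B, h_i, h_e, n)
    print(f"n = {n}: h at mid-length = {h_mid*1e3:.3f} mm")
```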
Khaled [152] showed that the maximum reduction in the dimensionless heated plate temperature (θ_W)_AVG for the case n = 0.25 is 55% below that for the case n = 1.0 (when H_i = h_i/h_e = 3.0). In addition, the maximum increase in wall shear stress for the case n = 0.25 is 44% above that for the case n = 1.0. The former percentage is greater than the latter, which again demonstrates the superiority of flexible thin film channels with flexible plates over those with inflexible plates. Finally, Khaled [13] developed a correlation for design purposes that relates (θ_W)_AVG to H_i, n, and Peε over the range 1.0 < H_i < 3.0, 1.0 < Peε < 50, and 0.1 < n < 2.0; the maximum percentage error between the results of the correlation and the numerical results is about 10%. In this correlation, u_o is the average flow speed at the exit. Note that Khaled [13] considered flexible thin films with a heated lower plate subject to a constant heat flux (q″) and an insulated upper plate.

3.7. Heat Transfer Enhancement Using Flexible Complex Seals. Khaled and Vafai [12] have also demonstrated that significant cooling inside flexible thin films, including flexible microchannel heat sinks, can be achieved if the supporting seals contain closed cavities that are in contact with the heated surface. They referred to this kind of sealing assembly as "flexible complex seals". For example, their results showed that an expansion of the microchannel to 1.26 times the initial height can cause a temperature drop equal to 16% of the initial average heated plate temperature. Later on, Khaled [13] showed that the average temperature of the heated plate always decreases as the thermal expansion parameter (F_T) increases when the thermal entry region is considered; for developed flows, however, it decreases as F_T increases only until F_T reaches a critical value, after which the temperature starts to increase. Further, he showed that the decrease in the heated plate temperature is significant at lower values of Re, Pr, and microchannel aspect ratio. He considered a fixed lower plate of the thin film, while the upper plate is flexible and separated from the lower plate by soft complex seals that allow a local expansion of the thin film height due to both changes in internal pressure and the lower (heated) plate temperature. Similar effects are expected when the upper plate is a bimaterial plate separated from the lower plate via soft seals. He assumed that the thin film height varies linearly with the local pressure and the local lower plate temperature; in this relationship, F_T1 is the thermal expansion parameter, and the coefficient β is the thermal expansion coefficient of the flexible complex seals. The parameter F_T1 increases as the heating load q, the thermal expansion coefficient β, and the reference thin film height increase, while it decreases as the fluid thermal conductivity k increases. The stiffness parameter S_1 is related to the elastic properties of the flexible complex seals.

3.7.1. Models for Heat Transfer Correlations. Khaled and Vafai [153] generated correlations for the average Nusselt number (Nu)_AVG and the dimensionless average mean bulk temperature (θ_m)_AVG for thin films supported by flexible complex seals with flexible upper plates over the parameter range 1.0 < S_1 < 10, 1.0 < Peε < 50, and 0 < F_T1 < 1.
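The exact height closure of [13, 153] is not reproduced in this review, but the described behavior, a thin film height growing linearly with local pressure and heated-plate temperature, can be illustrated with a fixed-point iteration. The dimensionless relation h/h_ref = 1 + p/S_1 + F_T1·θ_w used below is an assumed reading of that linear coupling, and the pressure and temperature closures are placeholders.

```python
# Hypothetical fixed-point coupling for a flexible-complex-seal thin film: the
# dimensionless height responds linearly to pressure and plate temperature,
# while opening the gap relaxes both. All closures and values are illustrative.
S1 = 5.0        # seal stiffness parameter (the range 1 < S1 < 10 is used in [153])
F_T1 = 0.5      # thermal expansion parameter (0 < F_T1 < 1 in [153])

h = 1.0         # dimensionless height h/h_ref
iterations = 0
for iterations in range(1, 51):
    p = 1.0 / h ** 3        # assumed lubrication-like pressure closure
    theta_w = 1.0 / h       # assumed plate-temperature closure (hotter when gap is small)
    h_new = 1.0 + p / S1 + F_T1 * theta_w
    if abs(h_new - h) < 1e-10:
        h = h_new
        break
    h = h_new

print(f"converged height ratio h/h_ref = {h:.4f} after {iterations} iterations")
```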
3.8. Heat Transfer Enhancement Using Vortex Generators. Acharya et al. [155] conducted experiments using an internally ribbed channel with cylindrical vortex generators placed above the ribs. They studied the effect of the spacing between the vortex generators and the ribs. They found that the heat and mass transfer depend on both the generator-rib spacing to rib height (s/e) ratio and the Reynolds number. They showed that at a low Reynolds number (Re = 5000), heat transfer enhancement was observed for all s/e ratios. However, at a high Reynolds number (Re = 30,000), enhancement was observed only for the largest s/e ratio (s/e = 1.5). For this ratio, the generator wakes and the rib shear layer interact with each other and promote mixing, thereby enhancing heat transfer. For the smallest s/e ratio (s/e = 0.55), because of the smaller gap between the generators and ribs, at high Reynolds numbers the ribs act as a single element and prevent the redevelopment of the shear layer, causing reduced heat transfer. Lin and Jang [156] numerically studied the performance of a wave-type vortex generator installed in a fin-tube heat exchanger. They found that an increase in the length or height of the vortex generator increases the heat transfer as well as the friction losses. They reported up to a 120% increase in the heat transfer coefficient at a maximum area reduction of 20%, accompanied by a 48% increase in the friction factor. Tiwari et al. [157] numerically simulated the effect of a delta winglet type vortex generator on the flow and heat transfer in a rectangular duct with a built-in circular tube. They observed that the vortices induced by the vortex generator increased the span-averaged Nusselt number at the trailing edge of the vortex generator by a factor of 2.5, with a heat transfer enhancement of 230% in the near-wake region. Dupont et al. [158] investigated the flow in an industrial plate-fin heat exchanger with periodically arranged vortex generators for Reynolds numbers ranging from 1000 to 5000. They found that the vortex intensity increases with the Reynolds number. O'Brien et al. [159] conducted an experimental study in a narrow rectangular duct fitted with an elliptical tube inside a fin-tube heat exchanger for Reynolds numbers ranging from 500 to 6300. A pair of delta winglets was used as the vortex generator. They estimated the local surface heat transfer coefficient and pressure drop. They found that the addition of a single winglet pair could increase the heat transfer by 38%. They also found that the increase in the friction factor due to the addition of a winglet pair was less than 10% over the range of Reynolds numbers studied. Tsay et al.
[160] numerically investigated the heat transfer enhancement due to a vertical baffle in a backward-facing step flow channel. The effects of the baffle height, thickness, and the distance between the baffle and the backward-facing step on the flow structure were studied in detail for Reynolds numbers ranging from 100 to 500. They found that introducing a baffle into the flow could increase the average Nusselt number by 190%. They also observed that the flow conditions and heat transfer characteristics are strong functions of the baffle position. Joardar and Jacobi [161] carried out experimental investigations to evaluate the effectiveness of delta-wing type vortex generators by full-scale wind-tunnel testing of a compact heat exchanger typical of those used in automotive systems. The mechanisms important to vortex enhancement methods are discussed, and a basis for selecting a delta-wing design as a vortex generator is established. The heat transfer and pressure drop performance were assessed at full scale under both dry- and wet-surface conditions for a louvered-fin baseline and for a vortex-enhanced louvered-fin heat exchanger. An average heat transfer increase over the baseline case of 21% for dry conditions and 23.4% for wet conditions was achieved with a pressure drop penalty smaller than 7%. Vortex generation was thus proven to provide improved thermal-hydraulic performance in compact heat exchangers for automotive systems. Heat transfer enhancement in a heat exchanger tube by installing a baffle was reported by Nasiruddin and Siddiqui [162]. They conducted a detailed numerical investigation of the vortex generator design and its impact on the heat transfer enhancement in a heat exchanger tube. The effect of baffle size and orientation on the heat transfer enhancement was studied in detail, and three different baffle arrangements were considered. The results show that for the vertical baffle, an increase in the baffle height causes a substantial increase in the Nusselt number, but the pressure loss is also very significant. For the inclined baffles, the results show that the Nusselt number enhancement is almost independent of the baffle inclination angle, with the maximum and average Nusselt numbers 120% and 70% higher, respectively, than those for the case with no baffle. For a given baffle geometry, the Nusselt number enhancement increased by more than a factor of two as the Reynolds number decreased from 20,000 to 5000. Simulations were also conducted with a second baffle. The results show that the average Nusselt number for the two-baffle case is 20% higher than for the one-baffle case and 82% higher than for the no-baffle case. These results suggest that a significant heat transfer enhancement can be achieved in a heat exchanger tube by introducing a baffle inclined towards the downstream side, with minimum pressure loss. Delta winglets are known to induce the formation of streamwise vortices and to increase heat transfer between a working fluid and the surface on which the winglets are placed. Lawson and Thole [163] employed delta winglets to augment heat transfer on the tube surface of louvered-fin heat exchangers. It is shown that delta winglets placed on louvered fins produce augmentations in heat transfer along the tube wall as high as 47%, with a corresponding increase of 19% in pressure losses. Manufacturing constraints are considered in this study, whereby piercings in the louvered fins resulting from stamping the winglets into the louvered fins are
simulated. Comparisons of measured heat transfer coefficients with and without piercings indicate that piercings reduce the average heat transfer augmentation, but significant increases still occur with respect to fins without winglets. The air-side heat transfer and friction characteristics of five kinds of fin-and-tube heat exchangers, with a tube row number N = 12 and a tube diameter D_o = 18 mm, were experimentally investigated by Tang et al. [164]. The test samples consisted of five fin configurations: crimped spiral fin, plain fin, slit fin, fin with delta-wing longitudinal vortex generators (VGs), and a mixed fin with a front 6-row vortex-generator fin and a rear 6-row slit fin. Heat transfer and friction factor correlations for the different types of heat exchangers were obtained for Reynolds numbers ranging from 4000 to 10,000. It was found that the crimped spiral fin provides higher heat transfer and pressure drop than the other four fins. The air-side performance of heat exchangers with the above five fins was evaluated under three sets of criteria; the heat exchanger with the mixed fin (front vortex-generator fin and rear slit fin) performed better than the one with the delta-wing vortex-generator fin, and the slit fin offered the best heat transfer performance at high Reynolds numbers. Based on the correlations of the numerical data, a Genetic Algorithm optimization was carried out, as sketched below; the optimization results indicated that increasing the VG attack angle or length, or decreasing the VG height, may enhance the performance of the vortex-generator fin. The heat transfer performances of the optimized vortex-generator fin and the slit fin were then compared numerically. A systematic numerical study of the heat transfer and pressure drop produced by vortex promoters of various shapes in a 2D, laminar microchannel flow has been presented by Meis et al. [168]. The liquid is assumed to be water, with temperature-dependent viscosity and thermal conductivity. The intent is to obtain useful design criteria for microcooling systems, taking into account that practical solutions should be both thermally efficient and inexpensive in terms of pumping power. Three reference cross sections, namely circular/elliptical, rectangular, and triangular, at various aspect ratios are considered. The effects of the blockage ratio, the Reynolds number, and the relative position and orientation of the obstacle are also studied. Some design guidelines based on two figures of merit (related to thermal efficiency and pressure drop, respectively), which could be used in an engineering environment, are provided. Sheik Ismail et al. [169] have presented a review of research and developments of compact offset and wavy plate-fin heat exchangers. The review is summarized under three major sections: offset fin characteristics, wavy fin characteristics, and nonuniformity of the inlet fluid flow. The various research aspects relating to internal single phase flow studied in offset and wavy fins are compared and summarized. Further, the works on the nonuniformity of the fluid flow at the inlet of compact heat exchangers are addressed, and the methods available to minimize these effects are compared.
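Tang et al. [164] do not give the internals of their optimizer in this review, so the following is only a generic sketch of how a genetic algorithm can search over vortex-generator attack angle, length, and height. The fitness function is a stand-in that merely encodes the qualitative trend they report; it is not their correlation, and the variable bounds are assumptions.

```python
import random

random.seed(0)

# Decision variables: VG attack angle (deg), length (mm), height (mm); bounds are assumed.
BOUNDS = [(10.0, 60.0), (5.0, 20.0), (1.0, 5.0)]

def fitness(x):
    """Stand-in performance index encoding the reported qualitative trend:
    larger attack angle and length help, larger height hurts (not Tang et al.'s model)."""
    angle, length, height = x
    return angle / 60.0 + length / 20.0 - height / 5.0

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    # Uniform crossover: each gene copied from either parent.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(x, rate=0.2):
    # Gaussian perturbation, clipped back into bounds.
    return [
        min(hi, max(lo, xi + random.gauss(0.0, 0.1 * (hi - lo))))
        if random.random() < rate else xi
        for xi, (lo, hi) in zip(x, BOUNDS)
    ]

population = [random_individual() for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                     # truncation selection
    children = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(len(population) - len(parents))
    ]
    population = parents + children

best = max(population, key=fitness)
print("best design (angle deg, length mm, height mm):",
      [round(v, 2) for v in best])
```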
3.8.1. Models for Heat Transfer and Friction Factor Correlations. Recently, Eiamsa-ard et al. [170] have experimentally investigated the heat transfer, flow friction, and thermal performance factor characteristics in a tube fitted with delta-winglet twisted tape, using water as the working fluid. The influences of the oblique delta-winglet twisted tape (O-DWT) and straight delta-winglet twisted tape (S-DWT) arrangements are also described. The experiments were conducted using tapes with three twist ratios (y/w = 3, 4, and 5) and three depth of wing cut ratios (DR = d/w = 0.11, 0.21, and 0.32) over a Reynolds number (Re) range of 3000-27,000 in a uniform wall heat flux tube. Note that d, y, and w are the depth of wing cut, the twisted tape pitch, and the tape width, respectively. The obtained results show that the mean Nusselt number and mean friction factor in the tube with the delta-winglet twisted tape increase with decreasing twist ratio (y/w) and increasing depth of wing cut ratio (DR). It is also observed that the O-DWT is a more effective turbulator, giving a higher heat transfer coefficient than the S-DWT. Over the range considered, the Nusselt number, friction factor, and thermal performance factor in a tube with the O-DWT are, respectively, 1.04 to 1.64, 1.09 to 1.95, and 1.05 to 1.13 times those in a tube with a typical twisted tape (TT). Empirical correlations for the Nusselt number (Nu), friction factor (f), and thermal performance factor (η) were developed for tubes with delta-winglet twisted tape inserts (both O-DWT and S-DWT) in the range Re = 3000-27,000, Pr = 4.91-5.57, twist ratio y/w = 3, 4, and 5, and depth of wing cut ratio DR = d/w = 0.11, 0.21, and 0.32; the predicted data are within ±10% for the Nusselt number and ±10% for the friction factor.
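A common definition of the thermal performance factor used in such comparisons, evaluated at equal pumping power, is η = (Nu/Nu_ref)/(f/f_ref)^(1/3). The sketch below applies this standard figure of merit to illustrative enhancement/penalty pairs; the ratios are placeholders, not measured values from [170].

```python
def thermal_performance_factor(nu_ratio: float, f_ratio: float) -> float:
    """eta = (Nu/Nu_ref) / (f/f_ref)**(1/3), the constant-pumping-power figure of merit."""
    return nu_ratio / f_ratio ** (1.0 / 3.0)

# Illustrative enhancement/penalty pairs (placeholders, not data from [170]):
for nu_r, f_r in [(1.2, 1.3), (1.5, 2.0), (1.04, 1.09)]:
    eta = thermal_performance_factor(nu_r, f_r)
    verdict = "net gain" if eta > 1.0 else "net loss"
    print(f"Nu ratio {nu_r:.2f}, f ratio {f_r:.2f} -> eta = {eta:.3f} ({verdict})")
```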
3.9. Heat Transfer Enhancement Using Protrusions. The effect of repeated horizontal protrusions on free-convection heat transfer in a vertical, asymmetrically heated channel has been experimentally investigated by Tanda [171]. The protrusions have a square section and are made of a low-thermal-conductivity material. Experiments were conducted by varying the number of protrusions over the heated surface (whose height was held fixed) and the aspect ratio of the channel. The convective fluid was air, and the wall-to-ambient air temperature difference was set equal to 45 K. The local heat transfer coefficient was obtained by means of the schlieren optical technique. The protrusions were found to significantly alter the heat transfer distribution along the heated surface of the channel, especially in the vicinity of each obstacle. For the ranges of parameters studied, the addition of low-conductivity protrusions leads to a decrease in the average heat transfer coefficient, as compared with that for the smooth surface, in the 0-7% range for the largest channel aspect ratio and in the 18-43% range for the smallest channel aspect ratio. Saidi and Sundén [172] have conducted a numerical analysis of the instantaneous flow and heat transfer for offset strip fin geometries in self-sustained oscillatory flow. The analysis is based on the two-dimensional solution of the governing equations of fluid flow and heat transfer with the aid of appropriate computational fluid dynamics methods. Unsteady calculations were carried out, and the obtained time-dependent results were compared with previous numerical and experimental results in terms of mean values as well as oscillation characteristics. The mechanisms of heat transfer enhancement are discussed, and it has been shown that the fluctuating temperature and velocity second moments exhibit non-zero values over the fins. The creation processes of the temperature and velocity fluctuations were studied, and the dissimilarity between them was demonstrated. Jubran et al. [173] investigated experimentally the effects of rectangular and noncubical obstacles of various lengths, widths, and heights on pressure drop and heat transfer enhancement. They found that changes in obstacle size or shape can lead to Nusselt number increases as high as 40%. Sparrow et al. [165, 166] found that mass transfer enhancements of up to 100% can be obtained using perturbations of uniform arrays of square obstacles. An extensive investigation of the fluid flow and heat transfer in a parallel plate channel with a solid conducting obstacle was conducted by Young and Vafai [154]. The rectangular obstacle was found to change the parabolic velocity field significantly, resulting in recirculation zones both upstream and downstream and a thermal boundary layer along the top face. Their results show that the shape and material of the obstacle have significant effects on the fluid flow and heat transfer.

3.9.1. Models for Heat Transfer Correlations. Young and Vafai [154] proposed correlations for the obstacle mean Nusselt numbers that describe the numerical results with mean errors of less than 6%. In these correlations, h_m, H, k_s, k_f, and w are the average convection heat transfer coefficient over the whole obstacle surface, the channel height, the obstacle thermal conductivity, the fluid thermal conductivity, and the obstacle length along the flow direction, respectively. The constants a, b, c, and d are found from Table 3, along with the range of the parameters. The inlet length before the obstacle is long enough that the flow ahead of the obstacle is fully developed, and the length of the channel after the obstacle is long enough that the recirculation zones downstream reattach well before the channel outlet. Note that u_m is the mean flow velocity inside the channel. The correlation is based on insulated lower and upper channel boundaries, with a constant heat flux (q″) at the lower obstacle surface, where q_w = h_m(2h + w)(T_w − T_e). In another work, Young and Vafai [174] performed a comprehensive numerical investigation of fluid and thermal transport within a two-dimensional channel containing large arrays of heated obstacles. They found that widely spaced obstacles can effectively transfer thermal energy into the fluid. They studied the effect of the periodicity of the obstacles on heat transfer by doubling the number of obstacles and evaluating the mean Nusselt numbers. The mean Nusselt number was found to reach the 5% and 10% difference levels, referenced to the ninth obstacle, at the eighth and seventh obstacles, respectively. The case with porous inserts has been discussed in the works of Alkam et al. [175].
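The heat balance quoted above, q_w = h_m(2h + w)(T_w − T_e), can be inverted to estimate the obstacle surface temperature once h_m is known; in [154] h_m would come from the Table 3 correlation, whereas the sketch below treats it, together with the heat input and geometry, as placeholder values, reading q_w as a heat rate per unit channel depth.

```python
def obstacle_surface_temp(q_w: float, h_m: float, h: float, w: float, t_e: float) -> float:
    """Invert q_w = h_m * (2h + w) * (T_w - T_e) for the obstacle surface temperature T_w.
    q_w is taken as the heat input per unit channel depth (W/m)."""
    return t_e + q_w / (h_m * (2.0 * h + w))

# Placeholder values; h_m would come from the Young-Vafai correlation in practice.
q_w = 50.0          # W per metre of channel depth
h_m = 250.0         # W/(m^2 K), assumed average coefficient
h, w = 5e-3, 10e-3  # obstacle height and streamwise length, m
t_e = 300.0         # inlet fluid temperature, K

t_w = obstacle_surface_temp(q_w, h_m, h, w, t_e)
print(f"estimated obstacle surface temperature: {t_w:.1f} K")   # 310.0 K for these inputs
```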
3.10. Heat Transfer Enhancement Using Ultra High Thermal Conductivity Composite Materials. Composite materials have been used primarily for structural applications. However, they have been found to be useful for heat dissipation, especially in electronic devices. An example of such a material is the metal matrix composite (MMC). Typical MMCs, including aluminum and copper matrix composites, do not show substantial improvements in thermal conductivity except when a reinforcing agent of vapor grown carbon fiber (VGCF) is used, as shown in the work of Ting and Lake [176]. For example, a VGCF-reinforced aluminum matrix composite exhibits a thermal conductivity that can reach 642 W/mK with a density of 2440 kg/m³ using 36.5% VGCF. However, all MMCs are electrically conductive. Chen and Teng [167] have shown that VGCF mat reinforced epoxy composites can have thermal conductivities larger than 695 W/mK with a density of 1480 kg/m³, in addition to having an electrically insulating surface. This is with a reinforcement of 56% by volume of heat treated VGCF. Recently, Naito et al. [177] have shown that grafting of high thermal conductivity carbon nanotubes (CNTs) is very effective in improving the thermal conductivity of certain types of carbon fibers, with improvements reaching 47%.

Conclusions

In this paper, the following heat transfer enhancers were described and reviewed: (a) extended surfaces including fins and microfins, (b) porous media, (c) large particle suspensions, (d) nanofluids, (e) phase-change devices, (f) flexible seals, (g) flexible complex seals, (h) vortex generators, (i) protrusions, and (j) ultra high thermal conductivity composite materials. Different research works on each have been reviewed, and many methods that assist their enhancement effects have been extracted from the literature. Among the methods presented in the literature are the use of joint-fins, fin roots, fin networks, biconvections, permeable fins, porous fins, helical microfins, and complicated designs of twisted tapes. It was concluded that more attention should be paid to single phase heat transfer augmented with microfins in order to resolve the disagreements between the works of different authors. Also, it was found that additional attention should be paid to uncovering the main mechanisms of heat transfer enhancement due to the presence of nanofluids. Moreover, we concluded that successful modeling of flow and heat transfer inside porous media, a well-recognized passive enhancement method, could perhaps help in uncovering the mechanism of heat transfer enhancement due to nanofluids, given some similarities between the two media. In addition, it is concluded that noticeable attention from researchers is required towards further modeling of flow and heat transfer inside convective media supported by flexible/flexible-complex seals in order to compute their levels of heat transfer enhancement. Eventually, many recent works related to passive augmentation of heat transfer using vortex generators, protrusions, and ultra high thermal conductivity composite materials have been reviewed. Finally, the estimated maximum levels of the heat transfer enhancement
due to each enhancer described in this report are presented in Table 4.

Table 4: Estimated highest recorded heat transfer enhancement level due to each enhancer (heat transfer in the presence of the enhancer relative to heat transfer in its absence).

Fins inside tubes: 2.0 [37]
Microfins inside tubes: 4.0 for laminar flow [70]; 1.4 for turbulent flow [70]
Porous media: ≈ 12.0 (k_eff/k_f) [78]
Nanofluids: 3.5 [10]
Flexible seals: 2.0 [151]
Flexible complex seals: 3.0 [153]
Vortex generators: 2.5 [157]
Protrusions: 2.0 [165, 166]
Ultra high thermal conductivity composite materials: 6 [167]
Mechanical properties and biocompatibility of a novel miniscrew made of Zr70Ni16Cu6Al8 bulk metallic glass for orthodontic anchorage

The purpose of the present study was to fabricate a miniscrew suitable for clinical application using Zr70Ni16Cu6Al8 bulk metallic glass (BMG), which has high mechanical strength, low elastic modulus, and high biocompatibility. First, the elastic moduli of Zr-based metallic glass rods made of Zr55Ni5Cu30Al10, Zr60Ni10Cu20Al10, Zr65Ni10Cu17.5Al7.5, Zr68Ni12Cu12Al8, and Zr70Ni16Cu6Al8 were measured. Zr70Ni16Cu6Al8 had the lowest elastic modulus among them. We then fabricated Zr70Ni16Cu6Al8 BMG miniscrews with diameters from 0.9 to 1.3 mm, conducted a torsion test, and implanted them into the alveolar bone of beagle dogs to compare insertion torque, removal torque, Periotest values, new bone formation around the miniscrew, and failure rate with those of a 1.3 mm diameter Ti-6Al-4V miniscrew. The Zr70Ni16Cu6Al8 BMG miniscrew exhibited a high torsion torque even at small diameters. Zr70Ni16Cu6Al8 BMG miniscrews with a diameter of 1.1 mm or less had higher stability and a lower failure rate than 1.3 mm diameter Ti-6Al-4V miniscrews. Furthermore, the smaller diameter Zr70Ni16Cu6Al8 BMG miniscrew was shown, for the first time, to have a higher success rate and to form more new bone around the miniscrew. These findings suggest the usefulness of our novel small miniscrew made of Zr70Ni16Cu6Al8 BMG for orthodontic anchorage.

to stress shielding at the joint between the implant and the bone 18. It has also been suggested that the reason for the loosening is the difference in elastic modulus relative to the bone 19, although the bone itself is viscoelastic 20. There are also some problems with the conventional miniscrews for orthodontic anchorage, such as the risk of damage to the adjacent tooth root during implantation 21-23, low strength leading to miniscrew breakage 24,25, miniscrew mobility and failure during treatment 26,27, and stability only with mature bone 28. Thus, the mechanical properties of the Ti-6Al-4V alloy, such as tensile strength and elastic modulus, and the new bone formation around the implanted miniscrew are not always sufficient for clinical use. We have previously reported that the proximity of the root and the miniscrew is related to the miniscrew failure rate 27,29. To avoid contact and proximity of the miniscrew to the root, we hypothesized that it would be effective to reduce the diameter of the miniscrew implanted into the narrow area of alveolar bone between tooth roots. If it is possible to manufacture a miniscrew made of Zr70Ni16Cu6Al8 BMG with a smaller diameter and stronger osseointegration than the conventional titanium alloy miniscrew, various problems could be alleviated by its higher strength, lower elastic modulus, and the possible avoidance of proximity to the tooth root. In particular, stress shielding is less likely to occur between a Zr70Ni16Cu6Al8 BMG miniscrew and the surrounding new bone, so strong osseointegration could be obtained. The aim of the present study was to investigate the potential clinical application of an orthodontic miniscrew with a smaller diameter than the currently used Ti and Ti-alloy miniscrews, using Zr70Ni16Cu6Al8 bulk metallic glass (BMG), which has high mechanical strength, low elastic modulus, and high biocompatibility.
Zr70Ni16Cu6Al8 BMG and the 1.3 mm diameter Ti-6Al-4V miniscrew showed 6.5 Ncm, 8.0 Ncm, and 16.5 Ncm, respectively (Fig. 1B). In the torsion test on used miniscrews, 0.9 mm, 1.0 mm and 1.

Surface condition of miniscrews observed by scanning electron microscope (SEM). SEM images did not show a clear difference in the shape or surface of the miniscrews between the Zr70Ni16Cu6Al8 BMG miniscrew and the Ti-6Al-4V miniscrew (Fig. 2A-J). At 8 weeks post-implantation, no major damage was observed on the used miniscrews. No obvious damage or deformation was observed on any edge of the used miniscrews at high magnification (Fig. 2K-T).

Insertion and removal torque testing. Zr70Ni16Cu6Al8 BMG miniscrews with diameters of 0.9 mm and 1.0 mm tended to have a slightly lower insertion torque than miniscrews with diameters of 1.1 mm and 1.3 mm, but no significant difference was observed (Fig. 3A). In the 200 gf loaded group, the removal torque values of the 0.9 mm and 1.0 mm diameter Zr70Ni16Cu6Al8 BMG miniscrews were significantly higher than that of the 1.3 mm diameter Ti-6Al-4V miniscrew (Fig. 3B). A similar trend was seen in the non-loaded group, but there was no significant difference. No significant difference in removal torque values was observed between the non-loaded and 200 gf loaded groups.

Stability of Zr70Ni16Cu6Al8 BMG miniscrews evaluated by Periotest. Periotest values were measured immediately after implantation and at 2, 4, 6, and 8 weeks post-implantation to evaluate changes in miniscrew stability. The Periotest values of the 1.3 mm diameter Ti-6Al-4V miniscrew and the 1.3 mm diameter Zr70Ni16Cu6Al8 BMG miniscrew increased significantly from 0 to 2 weeks post-implantation in the non-loaded and loaded groups (Fig. 3C,D). Periotest values of the 1.3 mm Ti-6Al-4V miniscrew did not change from 4 to 8 weeks post-implantation, but those of the 1.3 mm diameter Zr70Ni16Cu6Al8 BMG miniscrew decreased from 2 to 8 weeks post-implantation. The Periotest values of the 0.9 mm, 1.0 mm, and 1.1 mm diameter Zr70Ni16Cu6Al8 BMG miniscrews decreased significantly with time from 0 to 8 weeks post-implantation, indicating that the stability of miniscrews 1.1 mm or less in diameter increased in a time-dependent manner (Fig. 3C,D). At 8 weeks post-implantation, all Zr70Ni16Cu6Al8 BMG miniscrews showed significantly lower Periotest values than the Ti-6Al-4V miniscrew (Fig. 3C,D). There was no significant difference between the non-loaded and 200 gf loaded groups from 0 to 8 weeks post-implantation.

The fluorescent microscope images revealed the deposition of newly formed bone around both the 1.0 mm diameter Zr70Ni16Cu6Al8 BMG miniscrew and the 1.3 mm diameter Ti-6Al-4V miniscrew by double labeling with calcein (green) and tetracycline (yellow) (Fig. 4A k, l). Fluorescent double labels around Zr70Ni16Cu6Al8 BMG miniscrews were more obvious than those around the Ti-6Al-4V miniscrew (Fig. 4A k, l). The 1.0 mm diameter Zr70Ni16Cu6Al8 BMG miniscrew showed a significantly higher mineral apposition rate (MAR, μm/day; distance between double labels/days (7 days)) and bone formation rate (BFR, μm/day; MAR × (double label surface + 1/2 single label surface)/bone surface × 100) than those of the 1.3 mm diameter Ti-6Al-4V miniscrew (Fig. 4D,E).
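The MAR and BFR definitions quoted above translate directly into a small calculation. The sketch below applies them to made-up label measurements; the interlabel distance, label surfaces, and bone surface values are placeholders, not data from Fig. 4.

```python
def mineral_apposition_rate(interlabel_distance_um: float, interval_days: float = 7.0) -> float:
    """MAR (um/day) = distance between the two fluorochrome labels / labeling interval."""
    return interlabel_distance_um / interval_days

def bone_formation_rate(mar: float, double_label_surface: float,
                        single_label_surface: float, bone_surface: float) -> float:
    """BFR = MAR * (dLS + sLS/2) / BS * 100, per the definition quoted in the text."""
    return mar * (double_label_surface + 0.5 * single_label_surface) / bone_surface * 100.0

# Placeholder measurements for one section (calcein at day -10, tetracycline at day -3):
mar = mineral_apposition_rate(interlabel_distance_um=10.5)       # -> 1.50 um/day
bfr = bone_formation_rate(mar, double_label_surface=120.0,
                          single_label_surface=60.0, bone_surface=900.0)
print(f"MAR = {mar:.2f} um/day, BFR = {bfr:.2f}")                # -> MAR 1.50, BFR 25.00
```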
Correlation between failure rate and root proximity or miniscrew diameter, and between miniscrew diameter and root proximity. Ninety miniscrews were used to evaluate failure rate, root proximity, and miniscrew diameter (Table 1). 84.4% of failed miniscrews (27 out of 32) showed root proximity, and 86.2% of successful miniscrews (50 out of 58) showed root non-proximity (Table 1). There was a correlation between miniscrew failure rate and root proximity by the chi-squared or Fisher's exact probability test (P = 0.00000002, P < 0.01), indicating that the miniscrew failure rate decreased with decreasing root proximity. 96.9% (31 of 32) of failed miniscrews fell out at 2 weeks (25 miniscrews) or 4 weeks (6 miniscrews) post-implantation (Table 2). There was no correlation between loading or non-loading and the miniscrew failure rate by the chi-squared or Fisher's exact probability test (P = 0.4245) (Table 2).
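The reported association can be reproduced from the Table 1 counts quoted above (27 of 32 failed screws proximal; 50 of 58 successful screws non-proximal). A minimal sketch using scipy, assuming standard two-sided tests were applied:

```python
from scipy.stats import chi2_contingency, fisher_exact

# Rows: failed / successful; columns: root proximity / non-proximity (from Table 1).
table = [[27, 5],
         [8, 50]]

chi2, p_chi2, dof, _ = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table, alternative="two-sided")

print(f"chi-squared: chi2 = {chi2:.1f}, p = {p_chi2:.2e}")
print(f"Fisher exact: OR = {odds_ratio:.1f}, p = {p_fisher:.2e}")
# Both p-values fall far below 0.01, consistent with the reported P = 0.00000002.
```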
The concentration of each metal was at the same level for the control, the Zr70Ni16Cu6Al8 BMG miniscrew group, and the Ti-6Al-4V miniscrew group at 8 weeks post-implantation (Table 3). There was no significant difference in the blood concentrations of titanium, aluminum, vanadium, nickel, copper, and zirconium before implantation and 8 weeks after implantation between the Zr70Ni16Cu6Al8 BMG miniscrew group and the Ti-6Al-4V miniscrew group (Table 3).

Discussion

The present study investigated the potential clinical application of an orthodontic miniscrew with a smaller diameter than the currently used Ti and Ti-alloy miniscrews, using Zr70Ni16Cu6Al8 bulk metallic glass (BMG), which has high mechanical strength, low elastic modulus, and high biocompatibility. We demonstrated that Zr70Ni16Cu6Al8 BMG can be machined into fine screws with a diameter of 1.3 mm or less that have sufficient mechanical strength, higher stability, and a lower failure rate compared with the 1.3 mm diameter Ti-6Al-4V miniscrew that has been widely used in clinical practice. Furthermore, the smaller diameter Zr70Ni16Cu6Al8 BMG

Table 1. Evaluation of miniscrew failures and root proximity. **P < 0.01, diameter vs failure rate by Pearson's correlation coefficient test. *P < 0.05, diameter vs proximity by Pearson's correlation coefficient test.

Zr-based BMGs have a lower elastic modulus 12-17 than the Ti-6Al-4V alloy (100-130 GPa) 8-12. Therefore, we decided to create a prototype miniscrew made of Zr70Ni16Cu6Al8 BMG, which had the smallest elastic modulus, and to evaluate its usefulness as a miniscrew for orthodontic anchorage. Recently, Ti miniscrews and Ti-alloy miniscrews have been widely used in clinical practice, but these miniscrews have been reported to break during implantation and removal 25,30,31, and thus their strength is not always sufficient. In the present study, the torsion breaking torque value of the 1.1 mm diameter Zr70Ni16Cu6Al8

The initial stability may not be obtained if the implant placement torque is very small, and later stability supported by osseointegration may not be acquired if the implant placement torque is very large 35. A very tight placement torque can generate a high level of stress, resulting in degeneration of the bone at the implant-tissue interface 36. In a previous study, we implanted a 1.3 mm diameter Ti-6Al-4V miniscrew into the human mandible and evaluated the stability of the miniscrew 29. Good stability was obtained when the insertion torque value was less than 10.0 Ncm, but the failure rate increased when the insertion torque value reached 10.0 Ncm or more 29. In the present study, the insertion torque value of each Zr70Ni16Cu6Al8 BMG miniscrew was less than 10 Ncm when predrilling was performed using a round bar of a size suited to the diameter of each miniscrew. Accordingly, no significant difference was observed between the insertion torque values of miniscrews with various diameters. The removal torque value was used as one of the indexes to evaluate the degree of osseointegration 37. In general, the removal torque value of a miniscrew increases in proportion to its diameter 38. However, in the present study, Zr70Ni16Cu6Al8 BMG miniscrews with a smaller diameter of 1.0 mm or less showed significantly larger removal torque values than the 1.3 mm diameter Ti-6Al-4V miniscrew at 8 weeks post-implantation, suggesting that Zr70Ni16Cu6Al8 BMG miniscrews of 1.0 mm diameter or less form stronger osseointegration than the 1.3 mm diameter Ti-6Al-4V miniscrew. This suggestion was substantiated by bone histomorphometric analysis. At 8 weeks post-implantation, bone histomorphometric parameters such as BIC, BA, MAR, and BFR indicated significantly increased osseointegration and new bone formation around the Zr70Ni16Cu6Al8 BMG miniscrews of 1.3 mm or less in diameter compared with the 1.3 mm Ti-6Al-4V miniscrew. Furthermore, BIC and BA increased inversely with diameter, indicating that smaller diameter screws develop more osseointegration and bone formation around the miniscrew. Periotest values for dental implants are considered to indicate insufficient osseointegration if the value is 9 or higher 39. In the present study, the Zr70Ni16Cu6Al8 BMG miniscrew of each diameter showed Periotest values of 5.0 or less in both the non-loaded group and the 200 gf loaded group at 8 weeks post-implantation. The Zr70Ni16Cu6Al8 BMG miniscrews showed significantly lower Periotest values and higher stability than the Ti-6Al-4V miniscrew at 8 weeks post-implantation. This was also consistent with the bone histomorphometric analysis at 8 weeks post-implantation and suggests great potential for clinical application of the Zr70Ni16Cu6Al8 BMG miniscrew. Chen et al. 21 implanted 72 titanium screws in the mandibles of 6 mongrel adult dogs in contact with the roots of adjacent teeth and observed the roots and periodontium histologically over time. In that process, inflammation was observed in areas adjacent to tooth roots that were not directly damaged, causing bone resorption and bone remodeling. In the present study, it was suggested that the 1.3 mm diameter Ti-6Al-4V miniscrew and the 1.3 mm Zr70Ni16Cu6Al8 BMG miniscrew tended to lie close to the adjacent tooth root, that inflammation consequently occurred around them, and that osteoclast activity and bone resorption increased up to 2 weeks post-implantation. These observations were consistent with the increased Periotest values up to 2 weeks post-implantation. On the other hand, it was suggested that miniscrews of 1.1 mm or less were unlikely to be close to the tooth roots and did not interfere with normal remodeling of the surrounding bone during the 2 weeks post-implantation. These are considered to be the mechanisms by which the small-diameter miniscrews are less likely to fall out at 2 weeks post-implantation. Harmankaya et al. 40 implanted titanium miniscrews with several coatings and pure titanium miniscrews in the rat tibia and analyzed gene expression around the miniscrews.
As a result, osteoblast markers such as osteocalcin continued to rise until 28 days after implantation, whereas cathepsin K, an osteoclast marker, continued to rise until 7 days after implantation but then declined. In the present study using beagle dogs, the Periotest values of the 1.3 mm diameter Ti-6Al-4V miniscrew and the 1.3 mm diameter Zr70Ni16Cu6Al8 BMG miniscrew significantly increased at 2 weeks post-implantation in alveolar bone, and most of the failed screws fell out at 2-4 weeks post-implantation. These findings suggested that the initial stability of a 1.3 mm diameter miniscrew was low because osteoclasts were activated by acute inflammation from the bone wound after implantation; however, as osteoclast activity decreased, osteoblast activity with new bone formation continued to increase until 8 weeks, and the stability of the miniscrew then increased in the late phase of bone wound healing. On the other hand, miniscrews with a diameter of 1.1 mm or less showed no increase in Periotest values, and miniscrew failure was low at 2 weeks post-implantation and zero at 4 weeks post-implantation, suggesting that inflammation around the small miniscrews was not strong enough to cause them to fall out. Watanabe et al. 27 implanted 190 Ti-6Al-4V miniscrews with a diameter of 1.3 mm in the human mandible, analyzed the relationship between proximity to the root evaluated by CBCT and the miniscrew failure rate, and suggested that proximity to the root contributes to miniscrew failure by interfering with the bone remodeling process. In the present study, too, the chi-squared test showed a correlation between proximity to the tooth root and the miniscrew failure rate. In addition, the 1.3 mm diameter Ti-6Al-4V miniscrew and the 1.3 mm diameter Zr70Ni16Cu6Al8 BMG miniscrew showed a failure rate of 50%, while the 1.1 mm, 1.0 mm, and 0.9 mm diameter Zr70Ni16Cu6Al8 BMG miniscrews showed failure rates of 36.8%, 21.1%, and 25.0%, respectively. Pearson's correlation coefficient test also showed a correlation between miniscrew diameter and failure rate. These findings indicated that the failure rate of a miniscrew decreases in proportion to its diameter.
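The diameter-failure trend can be illustrated with the failure rates quoted above. The pairing of one rate per screw group is an assumed reading of the analysis (the paper does not spell out the exact inputs to its Pearson test):

```python
from scipy.stats import pearsonr

# (diameter in mm, failure rate in %) for the five screw groups quoted in the text;
# both 1.3 mm screws (Ti-6Al-4V and BMG) failed at 50%.
diameters = [1.3, 1.3, 1.1, 1.0, 0.9]
failure_rates = [50.0, 50.0, 36.8, 21.1, 25.0]

r, p = pearsonr(diameters, failure_rates)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")   # strong positive r: wider screws fail more often
```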
Among the constituent elements of the Zr70Ni16Cu6Al8 BMG miniscrew, Ni is known to be the most common sensitizer 41-44. Kinbara et al. 45 investigated the threshold Ni concentration for allergy in mice and found that sensitization required an Ni concentration of 1,296,000 ppb and induction required 1296 ppb. Furthermore, it has been reported that the threshold value for allergy induction in humans is about 5000 ppb 41. Previously, we found that there is little deposition in organs from the implanted Zr70Ni16Cu6Al8 BMG miniscrew, which is not toxic as a metallic biomaterial 15. In the present study, we measured the elution of metals into blood and compared blood concentrations before and 8 weeks after implantation of the miniscrews. The blood concentration of Ni detected was 0.61 ± 0.16 μg/g (610 ppb) with the 1.0 mm diameter Zr70Ni16Cu6Al8 BMG miniscrew and 0.72 ± 0.66 μg/g with the 1.3 mm Ti-6Al-4V miniscrew, both lower than the normal human blood concentration (1.3-3.3 μg/g, average 2.3 ± 0.16 μg/g) 46. Blood concentrations of other metals were also at similar levels before and 8 weeks after implantation for both the Zr70Ni16Cu6Al8 BMG and Ti-6Al-4V miniscrews. These findings suggested that the Zr70Ni16Cu6Al8 BMG miniscrew, like the Ti-6Al-4V miniscrew, is a safe biomaterial for clinical use. Zr-based BMG should make a significant contribution as a biomaterial for orthodontic miniscrews and other medical equipment in the future. The sample size was determined based on an article describing criteria for biological reproducibility 47 and other papers performing similar implant experiments in beagle dogs 48,49. We recognize that the miniscrew sample size in this study is relatively small; however, it is considered sufficient to evaluate the mechanical properties and biocompatibility of a novel smaller miniscrew made of Zr70Ni16Cu6Al8 BMG for orthodontic anchorage, which is the endpoint of the present study. Studies with larger sample sizes are needed to confirm our findings, especially the difference in failure rate, even though the significant results obtained in this study are justified. As the number of miniscrews increases, the reliability of the significant differences in the results increases. On the other hand, there are practical problems in terms of ethics, cost, and time. In this study, the number of miniscrews was more than 3 in each group of different miniscrew types. Therefore, there are limitations to the reliability and reproducibility of data with small n numbers. Future studies will be needed. In conclusion, Zr70Ni16Cu6Al8 BMG had a lower elastic modulus than the Ti-6Al-4V alloy, and the miniscrew made of Zr70Ni16Cu6Al8 BMG had sufficient strength for clinical use even with a small diameter of 1.1 mm or less. The Zr70Ni16Cu6Al8 BMG miniscrew with a diameter of 1.1 mm or less had higher stability and a lower failure rate than the 1.3 mm diameter Ti-6Al-4V miniscrew. Furthermore, the Zr70Ni16Cu6Al8 BMG miniscrew with a diameter of 1.1 mm or less induced more new bone formation and osseointegration around the miniscrew than the 1.3 mm diameter Ti-6Al-4V miniscrew. The Zr70Ni16Cu6Al8 BMG miniscrew, which has a small diameter and has not been clinically applied to date, has been demonstrated for the first time to be useful for orthodontic anchorage.

Ethics. Animal experiments were performed in accordance with the Regulations for Animal Experiments and Related Activities at Tohoku University. All animal protocols were approved by the Institutional Animal Care and Use Committee of the Tohoku University Environmental and Safety Committee. We complied with the ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments).

Measurement of elastic modulus and torsion test. First, the elastic modulus was measured by a free resonance vibration method (JE-RT; Nihon Techno-Plus Co. Ltd., Osaka, Japan) using three rods of each of the five types of Zr-based metallic glass; the beam relation underlying such resonance measurements is sketched at the end of this section. A torsion test was performed on 0.9 mm, 1.0 mm, and 1.1 mm diameter (0.70 mm, 0.80 mm, and 0.90 mm inner diameter, respectively) Zr70Ni16Cu6Al8 BMG miniscrews and a 1.3 mm diameter (1.10 mm inner diameter) Ti-6Al-4V miniscrew using torsion test equipment (PC torque analyser; Vectrix Corporation, Tokyo, Japan) and a digital torque meter (HP-10; HIOS Inc., Tokyo, Japan). The torsion test was performed by inserting a dedicated driver tip into the miniscrew head and leaving half of the miniscrew thread out of the jig. Insertion was performed at a speed of 15 rotations/min. A torsional force was applied until the miniscrew broke, and the torque value at the time of breaking was measured. In addition, the torsion torque values of miniscrews implanted for 8 weeks into the mandible of a beagle dog were also measured.
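For a uniform cylindrical rod in free-free flexural vibration, Euler-Bernoulli beam theory gives f_1 = ((β_1 L)² / (2πL²))·sqrt(EI/(ρA)) with β_1 L = 4.730, which can be inverted for the Young's modulus. The sketch below is that first-mode textbook estimate only (a commercial instrument such as the JE-RT applies further corrections), and the rod dimensions, density, and resonance frequency are placeholders.

```python
import math

def youngs_modulus_free_free(f1_hz: float, length_m: float, diameter_m: float,
                             density_kg_m3: float) -> float:
    """Invert the Euler-Bernoulli free-free beam relation
    f1 = (beta1*L)^2 / (2*pi*L^2) * sqrt(E*I / (rho*A)) for E, with beta1*L = 4.730."""
    beta1_l = 4.730
    area = math.pi * diameter_m ** 2 / 4.0        # cross-sectional area
    inertia = math.pi * diameter_m ** 4 / 64.0    # second moment of area
    factor = 2.0 * math.pi * f1_hz * length_m ** 2 / beta1_l ** 2
    return factor ** 2 * density_kg_m3 * area / inertia

# Placeholder rod: 3 mm diameter, 50 mm long, BMG-like density, assumed resonance frequency.
E = youngs_modulus_free_free(f1_hz=3700.0, length_m=0.050,
                             diameter_m=0.003, density_kg_m3=6600.0)
print(f"estimated Young's modulus: {E / 1e9:.1f} GPa")   # ~79 GPa, below Ti-6Al-4V's 100-130 GPa
```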
Preparation of Zr70Ni16Cu6Al8 BMG miniscrews.

Animals and surgical procedure. Twelve male beagle dogs, 10 months old and weighing 13.0-16.0 kg, were purchased from Kitayama Labs Co Ltd. (Nagano, Japan) and Japan SLC Inc (Shizuoka, Japan), and bred according to the guidelines of the Tohoku University animal experiment regulations. The different types of miniscrews used in the experiment were randomly implanted in each side of the mandibular alveolar bone of 10 beagle dogs, and 2 beagle dogs were implanted with either 1.0 mm diameter Zr70Ni16Cu6Al8 BMG miniscrews or Ti-6Al-4V miniscrews for blood samples. Up to 6 miniscrews of different types were randomly implanted at the miniscrew insertion sites on each side of the mandible (Supplementary Fig. S1A). The experimental groups were four groups of Zr70Ni16Cu6Al8 BMG miniscrews with diameters of 0.9 mm, 1.0 mm, 1.1 mm, and 1.3 mm and one group of 1.3 mm Ti-6Al-4V miniscrews, with a non-loaded group and a 200 gf loaded group for each miniscrew type. All animal experiments were performed under general anesthesia to reduce pain. The dogs were treated with 5 mg/kg diazepam for preanesthetic medication, 13 mg/kg pentobarbital for anesthesia, and 2 mg/kg xylazine hydrochloride for analgesia. Before inserting the miniscrews, dental radiographs of the surgical area were taken to confirm the positions of the tooth roots and the direction of insertion, and the surgical area was sterilized with hydrogen peroxide. After sterilization, local anesthesia was performed with 1.8 ml lidocaine hydrochloride epinephrine/adrenaline injection (Nipro, Osaka, Japan), and an incision was made through the periosteum on the alveolar bone. A pilot hole was drilled into the cortical bone with a low-speed handpiece (500 rpm) and a round bar under water irrigation. The diameter of the round bar used for the pilot hole was 0.2 mm smaller than the diameter of each miniscrew. A Zr70Ni16Cu6Al8 BMG miniscrew or Ti-6Al-4V miniscrew was inserted into the drilled hole using a manual torque screwdriver (FTD10CN-S; Tohnichi, Tokyo, Japan) at an angle of 45° to the cortical bone surface. To apply a force of 200 gf to the miniscrews, an elastomeric chain (Pro-chain; Dentsply Sirona, Tokyo, Japan) was placed from the miniscrew inserted between the 2nd premolar roots to the miniscrew between the 3rd premolar roots, from the miniscrew inserted between the 3rd and 4th premolars to the miniscrew inserted between the roots of the 4th premolar, and from the miniscrew inserted between the 4th premolar and 1st molar to the miniscrew inserted between the 1st molar roots. The load was measured using a spring scale immediately after implantation; dental radiographs were taken and root proximity was evaluated. Throughout the experiment, no systemic problems such as eating disorders, decreasing body weight, or gait disturbance were observed in the dogs.

Scanning electron microscopy (SEM) imaging. SEM (JSM-6390LA, JEOL, Tokyo, Japan) was used to evaluate the surface structure of used and unused miniscrews. Used miniscrews harvested 8 weeks after implantation were kept in 70% ethanol at 4 °C until used for SEM imaging. The surface texture of the miniscrew edge, the edge form, and the distance between pitches were examined at low (30×) and high (300×) magnifications.

Insertion and removal torque testing.
The peak insertion torque of each miniscrew was measured during placement into the alveolar bone using a manual torque screwdriver (FTD10CN-S; Tohnichi, Tokyo, Japan). The removal torque value was also measured at 8 weeks after implantation. Peak insertion and removal torque values were recorded in Ncm.

Mobility measurement by Periotest. Immediately after the miniscrews were inserted, their mobility was measured (week 0) to evaluate miniscrew stability using the Periotest (Gulden Messtechnik; Bensheim, Germany). The mobility of the miniscrews was also measured at 2, 4, 6, and 8 weeks after implantation. In accordance with the manufacturer's instructions, the tip of the Periotest was applied to the miniscrew head from a distance of 2.0-3.0 mm. Each miniscrew was subjected to Periotest measurements from three directions, approximately 120° apart, in the horizontal plane, as reported by Çehreli et al. 52. The measurement was repeated three times for each direction, and average values were determined.

Evaluation of root proximity and failure rate. The distance between the miniscrew and the root surface was calculated from dental radiographs taken after implantation, and defined as (distance between the tip of the miniscrew in the dental radiograph and the surface of the root) × (diameter of the miniscrew) / (diameter of the miniscrew in the dental radiograph). The radiographs were classified into two categories, non-proximity or proximity, depending on the distance between the miniscrew tip and the root surface (Supplementary Fig. S1B). According to the report of Watanabe et al. 27, non-proximity was defined as a distance between the miniscrew tip and the root surface of more than 0.7 mm, and proximity as a distance of under 0.7 mm (Supplementary Fig. S1B, C). The miniscrew failure rate was calculated and compared with the root proximity and the presence or absence of a 200 gf load.
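The magnification correction described above scales the measured radiographic distance by the ratio of the true to the radiographic screw diameter, using the screw itself as an internal scale bar. A minimal sketch, with the 0.7 mm proximity cutoff from Watanabe et al. 27 and made-up measurements:

```python
def true_root_distance(radiograph_distance_mm: float, true_diameter_mm: float,
                       radiograph_diameter_mm: float) -> float:
    """Correct the radiographic tip-to-root distance for magnification using the
    known screw diameter as an internal scale bar."""
    return radiograph_distance_mm * true_diameter_mm / radiograph_diameter_mm

def classify_proximity(distance_mm: float, cutoff_mm: float = 0.7) -> str:
    return "proximity" if distance_mm < cutoff_mm else "non-proximity"

# Made-up radiographic measurements for a 1.0 mm screw that appears 1.25 mm on film:
d = true_root_distance(radiograph_distance_mm=0.80, true_diameter_mm=1.0,
                       radiograph_diameter_mm=1.25)
print(f"corrected distance = {d:.2f} mm -> {classify_proximity(d)}")   # 0.64 mm -> proximity
```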
Histomorphometric analysis. Eight weeks after implantation, 4.3 mm internal diameter trephine burs (Dentech, Tokyo, Japan) were used to harvest the alveolar bone specimens, including the miniscrew, from the 12 beagle dogs. A sequence of fluorochrome labels, calcein green (5 mg/kg body weight) (Dojindo Laboratories, Kumamoto, Japan) and tetracycline (10 mg/kg body weight) (Wako, Osaka, Japan), was injected intravenously at 10 days and 3 days, respectively, before harvesting the specimens at 8 weeks post-implantation. Specimens were immersed in a 4% paraformaldehyde solution for 48 h and dehydrated in an ascending series of ethanol. After dehydration, the specimens were treated with an intermediate agent, xylene, infiltrated with methyl methacrylate (Osteoresin Embedding Kit; Wako, Osaka, Japan) at 4 °C, and embedded at 35 °C. Embedded specimens were sectioned with a Leica Saw Microtome SP1600 (Leica Microsystems, Wetzlar, Germany) and stained with Villanueva Osteochrome Bone Stain (Polysciences Inc., Pennsylvania, USA) for bright-field microscopic examination. Thirteen sections with a thickness of 100 μm were prepared every 300 μm from the miniscrew head side of the thread. BIC, BA, MAR, and BFR were measured on the 10th, 11th, and 12th sections, where cancellous bone was observed (Supplementary Fig. S2B). Histomorphometric analysis was performed on bone in the 240 μm region from the miniscrew surface using an optical microscope (DP72, Olympus Corporation, Tokyo, Japan) (Supplementary Fig. S2A). Image J Launcher software (National Institutes of Health, Bethesda, Maryland, USA) was used to measure and convert pixels to micrometers. BIC and BA were calculated as averages of 3 sections. Dynamic assessment of bone formation was based on dual calcein-tetracycline labeling using a fluorescence microscope (BZ-9000; Keyence, Osaka, Japan). The distance between two consecutive labels was measured for MAR and BFR (Supplementary Fig. S2C, D). MAR was measured at four double labels in the 240 μm region from the miniscrew surface at 10× magnification, and MAR and BFR were calculated as the average of 12 measured values (4 double labels × 3 sections) (Supplementary Fig. S2C, D).

Measurement of metal concentrations in beagle dogs' venous blood. Seven Ti-6Al-4V miniscrews with a diameter of 1.3 mm were implanted in one beagle dog, and seven Zr70Ni16Cu6Al8 BMG miniscrews with a diameter of 1.0 mm were implanted in another beagle dog. Before implantation and at 2, 4, 6, and 8 weeks after implantation, a total of 7.0 ml of blood was taken from the saphenous vein of the left hind leg of the beagle dogs according to the method of Assad et al. 53. Blood was taken 4 times and measured. The blood samples were frozen and stored at −20 °C until analysis. Concentrations of titanium, zirconium, nickel, copper, aluminum, and vanadium in blood were measured by inductively coupled plasma mass spectrometry (ICP-MS) (Agilent 8800; Agilent Technologies, California, USA). Analysis by ICP-MS was performed at an infinitesimal material analysis room in the Tohoku University School of Engineering. Metal concentrations were measured in the blood collected four times from each beagle dog, and the average value and S.D. were calculated. Metal concentrations without any implant were used as controls.

Statistical analysis. Statistical significance was determined by one-way analysis of variance followed by post hoc analysis using the Tukey-Kramer multiple comparison test. The chi-squared or Fisher's exact probability test was used to examine the correlation between miniscrew failure rate and root proximity. Pearson's correlation coefficient was used to examine the correlation between miniscrew diameter and failure rate, and between miniscrew diameter and root proximity. All data are presented as mean ± standard deviation. Differences were considered statistically significant at *P < 0.05 and **P < 0.01.

Data availability. The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.